
Most of the products I have built started with a problem I encountered as a user or a gap I spotted while doing research. AdjustmentScore started differently. It started with a conversation at home.
My wife holds a Master's in Counseling Psychology with a specialisation in Marriage and Family Life. She works with clients professionally and has spent years studying the instruments and frameworks that underpin structured psychological assessment. One of those frameworks is the Multidimensional Adjustment Battery (MAB), a clinical instrument that measures psychological adjustment across twenty distinct scales.
At some point in one of our conversations, she mentioned that the tools available for administering assessments like the MAB were either designed for institutional use at institutional prices, difficult for practitioners working independently to access, or so clinically dense that they were impractical for clients who did not have a psychology background. There was a gap between what the academic framework could offer and what was actually accessible to the people who needed it.
I heard that and started asking questions. What would a well-designed digital version look like? Who would actually use it? Could it be built in a way that was clinically sound but accessible to a non-specialist? And could it sustain itself as a product rather than requiring ongoing subsidisation?
The answers to those questions became AdjustmentScore.
Psychological self-assessment has a paradox at its centre.
The instruments that have been most rigorously validated, that have years of clinical research behind them, that produce results you can actually rely on, are almost entirely locked inside academic institutions and professional practice settings. The tools that are widely available to individuals (the online quizzes, the personality tests, the well-being check-ins) are often built on much thinner foundations, designed more for engagement than for accuracy.
There is a real population of people who fall in between those two worlds.
Professionals who want structured self-insight outside of a formal clinical context. Individuals going through significant life transitions who want something more substantive than a ten-question quiz but do not need or cannot access full clinical intervention. Practitioners who want to offer their clients a structured starting point before a first session.
For that population, the options were poor. The institutional tools were inaccessible. The consumer tools were unreliable. Nothing in between combined clinical validity with accessibility and a price point that made sense for an individual.
That was the gap AdjustmentScore was built to fill.
The Multidimensional Adjustment Battery is a clinical assessment instrument that measures psychological adjustment across twenty scales. These scales cover areas including emotional adjustment, social adjustment, occupational functioning, family relationships, and several others that together give a structured picture of how a person is functioning across the key domains of their life.
What makes the MAB clinically useful is its structure. It does not ask you how you feel in a general sense. It asks specific, targeted questions across each domain, uses validated scoring logic to interpret your responses, and produces results that can be compared against established norms.
That comparison against norms is what separates a clinical instrument from a general wellness quiz; it situates your results in the context of how other people in similar circumstances tend to respond.
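The norm comparison described above can be sketched in a few lines. This is an illustration only: the actual MAB norm tables are not public, so the mean and standard deviation here are hypothetical values, and the sketch assumes an approximately normal reference distribution.

```python
from statistics import NormalDist

def percentile_vs_norm(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Situate a raw scale score against a reference population.

    Assumes the norm data is approximately normal; the parameters
    passed in here are illustrative, not real MAB norms.
    """
    z = (raw_score - norm_mean) / norm_sd
    return round(NormalDist().cdf(z) * 100, 1)

# A raw score of 43 against a hypothetical norm (mean 50, SD 10)
print(percentile_vs_norm(43, 50, 10))  # 24.2 -> roughly the 24th percentile
```

The percentile, not the raw number, is what gives a result like "43" any meaning for the reader.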
The MAB also uses reverse scoring on a subset of items. Reverse scoring means that for certain questions, a high response actually indicates a lower level of adjustment in that domain. This is standard in validated psychological instruments. It exists to catch response bias, the tendency some people have to agree with statements regardless of their experience.
Getting reverse scoring right is not optional. Getting it wrong produces results that are not just inaccurate but potentially misleading in ways that matter when you are dealing with someone's psychological state.
My wife brought the academic framework. She understood the MAB, its scales, its scoring logic, and its clinical context. My job was to turn that framework into a product that worked reliably for the people who needed it.
The first and most technically demanding part of the build was implementing the scoring engine correctly.
Twenty scales. Multiple items per scale. Reverse scoring on a subset of those items. Norm comparisons that required storing reference data and calculating where a respondent's score sat relative to that reference population.
I built the scoring engine before building anything else, and I tested it against known outputs until the results matched what the clinical framework predicted they should produce. My wife reviewed every scale and every item to verify that the logic reflected how the instrument was designed to work.
This was not a place for approximation. A scoring error in a quiz about favourite films has no consequences. A scoring error in a psychological assessment tool produces results that a person might use to understand their own mental health. The standard has to be different.
The reverse scoring implementation in particular required careful attention. Each reversed item had to be flagged in the data structure, and the scoring function had to handle the inversion before aggregating the scale total. Getting that right across twenty scales, with different subsets of reversed items in each, was painstaking work. I ran the engine against every possible edge case I could construct before I was confident enough to move on.
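The mechanics of that inversion can be sketched as follows. The item IDs, the reversed set, and the 5-point Likert range are all illustrative assumptions, not the actual MAB item list; the point is that the inversion happens per item, before aggregation.

```python
LIKERT_MAX = 5  # assuming a 1-5 Likert response scale

def score_scale(responses: dict[str, int], reversed_items: set[str]) -> int:
    """Aggregate one scale, inverting reverse-scored items first."""
    total = 0
    for item_id, value in responses.items():
        if item_id in reversed_items:
            # Invert before aggregating: 1 -> 5, 2 -> 4, ..., 5 -> 1
            value = (LIKERT_MAX + 1) - value
        total += value
    return total

# Hypothetical items: EA2 is flagged as reverse-scored
responses = {"EA1": 4, "EA2": 2, "EA3": 5}
print(score_scale(responses, reversed_items={"EA2"}))  # 4 + 4 + 5 = 13
```

Flagging reversed items in the data structure, rather than hard-coding them into the scoring function, is what makes the logic auditable scale by scale.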
The pay-per-report model created a specific problem that most digital products do not have to solve: how do you make sure that someone who pays for a report actually gets the report they paid for, and that someone who did not pay cannot access it?
This sounds simple until you think through the attack vectors. A shared link. A bookmarked results URL. A completed assessment that someone tries to submit twice. A payment that fails after the assessment has already been completed.
I built a token-based validation system to handle this. When a user starts an assessment, they receive a unique session token. That token is tied to their payment status. Completing the assessment without a valid paid token does not produce a report; it produces a payment prompt.
Once payment is confirmed via Stripe, the token is marked as paid, and the report is generated. The token is single-use. A URL that produces a report for one person cannot produce a report for someone else who tries to access the same URL later.
Building this correctly required thinking through the user experience at every stage: the user who pays and completes the assessment in sequence, the user who completes the assessment before paying, the user whose payment fails partway through, and the user who tries to share their results link.
Each of those states needed a response that was both technically correct and felt reasonable to the person experiencing it.
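The token lifecycle behind those states can be sketched as a small state machine. The state names and transitions here are my reconstruction of the approach described above, not the production code.

```python
from enum import Enum, auto
import secrets

class TokenState(Enum):
    ISSUED = auto()    # assessment started, no payment yet
    PAID = auto()      # payment confirmed
    REDEEMED = auto()  # report generated; token is single-use

class SessionToken:
    def __init__(self):
        self.value = secrets.token_urlsafe(32)  # unguessable session token
        self.state = TokenState.ISSUED

    def mark_paid(self):
        """Called once the payment provider confirms the charge."""
        if self.state is TokenState.ISSUED:
            self.state = TokenState.PAID

    def redeem_for_report(self) -> bool:
        """Only a paid, unredeemed token produces a report."""
        if self.state is TokenState.PAID:
            self.state = TokenState.REDEEMED
            return True
        return False  # unpaid, or already used (a shared/bookmarked link)

t = SessionToken()
print(t.redeem_for_report())  # False: completed but not paid
t.mark_paid()
print(t.redeem_for_report())  # True: first and only redemption
print(t.redeem_for_report())  # False: replay blocked
```

The single-use transition from PAID to REDEEMED is what closes off the shared-link and double-submission attack vectors.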
The third hard problem was the report itself.
A raw score across twenty psychological scales is not useful to most people. A number that tells you your "emotional adjustment score is 43" means nothing without context. The report had to translate clinical outputs into language that was meaningful to someone without a psychology degree while remaining faithful to what the instrument actually measures.
My wife and I went through multiple iterations of the report format. The final version shows each scale result with a brief interpretation, situates the result relative to the norm population in plain language, and flags areas that fall significantly below the norm with additional context. It does not diagnose anything, and the report is explicit about that, but it gives the reader a structured starting point for understanding their own functioning across the domains the MAB covers.
Getting that balance right, clinically honest but not clinically overwhelming, took more iterations than I expected. The language had to be precise enough to be meaningful and plain enough to be accessible.
Those two requirements pull in opposite directions, and finding the point where they meet is a design problem, not a technical one.
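One way that balance shows up concretely is in the banding logic that maps a norm-relative score to plain language. The cut-offs and wording below are illustrative, not the actual report copy, but they show the shape of the translation: a numeric percentile in, a careful non-diagnostic sentence out.

```python
def interpret(percentile: float) -> str:
    """Map a norm-relative percentile to plain-language wording.

    Band cut-offs are illustrative; ~16th percentile is roughly
    one standard deviation below a normal-distributed norm.
    """
    if percentile < 16:
        return "notably below the typical range; worth a closer look"
    if percentile < 50:
        return "somewhat below the typical range"
    if percentile < 84:
        return "within the typical range"
    return "above the typical range"

print(interpret(24.2))  # "somewhat below the typical range"
```

Note what the wording avoids: no clinical labels, no diagnosis, just a structured position relative to the norm population.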
The monetisation model for AdjustmentScore is deliberately simple. You pay $7.99. You complete the assessment. You receive your report.
That simplicity at the surface level required some careful plumbing underneath. The flow had to handle payment confirmation before report generation, not after.
It had to handle failed payments gracefully without losing the user's assessment progress. It had to provide clear feedback at each step so the user always knew where they were in the process.
The Stripe integration itself was straightforward. I have wired Stripe into enough products at this point that the mechanics are familiar. The harder part was designing the state machine around the payment: what the user sees before paying, what happens during payment processing, what they receive on success, and what they see if something goes wrong.
Every state had to feel considered. A user who is about to pay $7.99 for a psychological assessment report is in a different emotional state than a user browsing an AI tools directory. The stakes feel higher. The trust requirement is higher. The interface and the flow had to reflect that.
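The state machine around the payment can be sketched like this. The state names and allowed transitions are assumptions drawn from the situations described above; the key property is that a failed payment returns the user to an awaiting-payment state with their answers intact, never back to the start of the assessment.

```python
from enum import Enum, auto

class FlowState(Enum):
    ASSESSING = auto()         # answering questions; progress saved
    AWAITING_PAYMENT = auto()  # assessment complete, no confirmed payment
    PROCESSING = auto()        # payment in flight
    PAYMENT_FAILED = auto()    # answers kept; user prompted to retry
    REPORT_READY = auto()      # payment confirmed, report generated

TRANSITIONS = {
    FlowState.ASSESSING: {FlowState.AWAITING_PAYMENT},
    FlowState.AWAITING_PAYMENT: {FlowState.PROCESSING},
    FlowState.PROCESSING: {FlowState.REPORT_READY, FlowState.PAYMENT_FAILED},
    FlowState.PAYMENT_FAILED: {FlowState.PROCESSING},  # retry without data loss
    FlowState.REPORT_READY: set(),  # terminal; the token is spent
}

def can_transition(a: FlowState, b: FlowState) -> bool:
    return b in TRANSITIONS[a]

print(can_transition(FlowState.PROCESSING, FlowState.PAYMENT_FAILED))  # True
print(can_transition(FlowState.PAYMENT_FAILED, FlowState.ASSESSING))   # False
```

Making the failure path a first-class state, rather than an error page, is what keeps a failed charge from costing the user twenty scales' worth of answers.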
AdjustmentScore is a live, accessible psychological self-assessment platform built on the MAB framework.
The full twenty-scale assessment runs in the browser. The scoring engine handles reverse scoring, norm comparisons, and scale aggregation correctly. The Stripe payment flow works. Reports are generated automatically on payment confirmation. The anti-fraud token system prevents unauthorised access to paid reports.
The platform was built as a solo technical project using a clinical framework that my wife validated at every stage of development.
Current users include individuals seeking structured self-insight, professionals who use it as a starting point with clients, and people navigating significant life transitions who want something more substantive than a general wellness resource.
Clinical accuracy is a non-negotiable constraint, not a quality setting.
In most products, you can ship something that is 80% right and iterate toward better. In a psychological assessment tool, the scoring logic either reflects the clinical framework correctly or it does not. There is no partial credit. The discipline of building to that standard (testing exhaustively, checking against known outputs, having a domain expert validate the logic) is something I have carried forward into how I think about data integrity in every product I have built since.
When the person you are building with is also the domain expert, the collaboration looks different.
My wife was not a client. She was not a stakeholder giving me requirements. She was the person who understood what the instrument was supposed to do and why, and my job was to make the technology faithful to that understanding.
That relationship, builder and domain expert, not builder and client, produced a better product than either of us could have produced alone. It also required a different kind of communication: less about features and more about intention.
The pay-per-report model is honest in a way that subscription models are not.
I chose pay-per-report over subscription because the value is delivered in a single interaction. A person takes the assessment, receives the report, and has what they came for.
Charging them $7.99 for that transaction is an honest exchange. Charging them $9.99 a month for ongoing access to something they might only ever use once would not be.
The model also creates a clean user experience. There is no account to manage, no subscription to remember, and no renewal email. You pay, you get your report, you are done. That simplicity was a product decision as much as a business model decision.
The immediate priority for AdjustmentScore is growing awareness among the practitioners and professionals who are most likely to use it and recommend it — counsellors, therapists, coaches, and the individuals they work with.
The longer-term direction involves additional assessment instruments. The MAB is one framework. Others address different aspects of psychological functioning, and the platform infrastructure that runs AdjustmentScore could support them. That would require the same kind of careful clinical collaboration that produced the first version, and the same non-negotiable standard for scoring accuracy.
There is also a professional version in the roadmap, a tier designed for practitioners who want to administer the assessment to multiple clients, review results across a caseload, and export data in formats that fit their practice workflows. That version has different requirements and a different pricing model, and it is not the immediate next step, but it is where the product can grow.
The honest answer is that AdjustmentScore was harder to build than most of the other products in my portfolio. The clinical accuracy requirement, the trust the subject matter demands, and the collaborative process with a domain expert all made it more demanding than a directory or a content platform.
I built it anyway because the problem was real, the potential impact was meaningful, and the collaboration with my wife made it possible to do it properly rather than approximately.
Building products that are easy is how you get better at building. Building products that are hard is how you find out what you are actually capable of. AdjustmentScore sits in the second category. That is the kind of project I want on my record.
Adeyemi Adetilewa is a product builder, content strategist, and digital marketer. AdjustmentScore is live at adjustmentscore.com. You can read more about his work at adeyemiadetilewa.com.
