Bug of the Week: Betting on Compliance Controls That Stopped at the Front End
A licensed sportsbook accepted bets carrying a DENIED geolocation verdict. The SDK worked. The token was signed. The backend just never enforced the decision.
Bug of the Week is a series from Shinobi Security highlighting particularly interesting findings from recent engagements. This week's bug is from a private engagement against a licensed iGaming operator. Operator-identifying details are anonymised. The third-party geolocation SDK named here behaved as designed and is referenced only because the integration pattern is the point.
Some bugs live in the code. Some live in the dependencies. This one lived in the gap between a regulatory control and the place in the application where that control was supposed to be enforced.
A licensed sportsbook was accepting bets from users in jurisdictions it was not allowed to serve. The geolocation layer did its job. The client carried the geolocation verdict on the request. The backend accepted the bet anyway.
That is the bug.
Everything else in this write-up is about why it matters: this issue was findable because an LLM-backed pentesting platform like Shinobi can combine technical testing with the kind of regulatory and governance understanding you would normally expect from a subject-matter expert, then reason about whether the application is actually enforcing the control that context implies should exist.
The bug, in two requests
The operator had integrated xPoint, a geolocation compliance provider commonly used in regulated iGaming environments. The pattern was familiar: the client-side SDK assessed the user's location and device context, produced a signed JWT containing the verdict, and the front end sent that token in the Api-Request-Geo header on regulated actions.
Connect through a VPN exit in Poland, a jurisdiction this operator was not permitted to serve, and xPoint returned exactly the sort of result you would hope to see. The token carried a denied verdict, identified the jurisdiction, and included an error instructing the integrator to block the action and tell the user to disable the VPN.
Decoded, with operator-specific identifiers placeholdered, the JWT looked like this:
{
"iss": "https://xpoint.tech",
"data": {
"userId": "<user-uuid>",
"clientId": "operator-stage",
"ip": "<vpn-exit-ip>",
"status": "DENIED",
"errors": [{
"code": 3002,
"description": "For security reasons, you need to disable any running VPNs, proxies, and IP anonymization tools to place wager or make deposits. Please address the items above, then try again."
}],
"jurisdictionArea": { "id": "pl,pl", "name": "PL, PL" },
"expirationTime": 1777752604816,
"timestamp": "2026-05-02T20:09:52.509Z"
},
"exp": 1777752604,
"iat": 1777752592
}
status: "DENIED". Resolved to Poland. A VPN warning in the error block. The verdict could not have been much clearer.
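Seeing this during testing requires decoding the token, not verifying it; a JWT's payload segment is just base64url-encoded JSON. A minimal sketch, where the sample token is constructed locally to mirror the fields above rather than being a real xPoint token:

```python
import base64
import json

def decode_geo_verdict(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature.

    Fine for inspecting what the client sends; signature verification
    is the server's job, which is exactly the point of this bug.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments drop base64 padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample token mirroring the fields seen above (fake signature).
payload = {"data": {"status": "DENIED",
                    "jurisdictionArea": {"id": "pl,pl", "name": "PL, PL"}}}
token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode(),
    "fake-signature",
])

verdict = decode_geo_verdict(token)
print(verdict["data"]["status"])  # DENIED
```

One decode call is all it takes to read the decision field the server was supposed to act on.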
The client still sent the normal authenticated bet-placement request:
POST /v1/backend/execute?command=12345 HTTP/1.1
Host: operator.example
Authorization: Bearer <USER_JWT>
Api-Request-Geo: eyJraWQiOiJjb2ludGVjaG5vbG9neTkyOHN0YWdlIi...
API-Execute-Command: 101105
Content-Type: application/x-www-form-urlencoded
message=<ENCRYPTED_MESSAGE>
The server returned 200 OK, code: 0, message: "ok".
A replay of the same request then triggered the duplicate-submission guard:
{ "errorCode": "518000110", "message": "Bet already placed" }
That response only made sense if the first request had already committed. Transaction history confirmed the stake had left the user's balance. A wager existed in the operator's books even though the geolocation verdict attached to the request said it should not have been accepted.
There was no payload manipulation here. No forged token. No header tampering. The standard client did all the work. The server just failed to act on what it had been told.
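The commit inference is worth spelling out. Here is a minimal model of it, with a stub standing in for the real endpoint; the behaviour is simulated to match the observed responses, not taken from the operator's code:

```python
# Stub modelling the observed endpoint behaviour: commit the first
# submission, then report a duplicate on replay.
seen_bets: set = set()

def submit_bet(bet_id: str) -> dict:
    """Hypothetical bet endpoint: commits once, rejects replays."""
    if bet_id in seen_bets:
        return {"errorCode": "518000110", "message": "Bet already placed"}
    seen_bets.add(bet_id)
    return {"code": 0, "message": "ok"}

first = submit_bet("bet-123")
replay = submit_bet("bet-123")

# The duplicate error on the replay only makes sense if the first
# request committed a wager - which is how the commit was confirmed.
print(first["message"])   # ok
print(replay["message"])  # Bet already placed
```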
How Shinobi found it
This is the part worth dwelling on, because it says something important about how this class of issue surfaces.
Shinobi did not start with a payload or a hard-coded test for this specific issue. It started the same way it always does: by exploring the application, mapping the workflow, identifying the important actions, and building an understanding of what the platform was actually doing. That process led it into the platform's geolocation and wagering controls.
What made this visible was not a clever payload. It was the reasoning.
Shinobi recognised the kind of application it was looking at, understood the role this control was supposed to play in that application, and then asked the question that actually mattered. This was clearly a licensed sportsbook. The workflow was clearly a regulated one. The requests were carrying a signed geolocation verdict from a specialist provider, and the token itself contained jurisdictional information and an explicit allow or deny decision. Taken together, those are not just technical artefacts. They are strong signals about how the application is supposed to behave. And so an attack plan was created.

The important step was recognising that, in this sort of workflow, the geolocation token is not just metadata travelling with the request. It is a control signal tied to a licensing obligation, and that means the server ought to be making a binding decision based on it.
Once the attack plan had been created, the rest was straightforward. Walk the front end. Identify the regulated commands. Capture the requests. Decode the token. Inspect the decision field. Then test whether a denied verdict actually changes the server's behaviour.
In this case, it did not.
There was no effective server-side gate on the geolocation verdict in the transaction path we tested. The token was there. The denied decision was there. But the application behaved as though the verdict were favourable and accepted the wager anyway.
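For contrast, a binding server-side gate does not need to be elaborate. A sketch of what one might look like, assuming the token signature has already been verified upstream and assuming "PASSED" as the allow value (the engagement only observed "DENIED", so the exact allow status is an assumption, and all names here are illustrative):

```python
# Hypothetical enforcement gate; geo_claims is the already
# signature-verified token payload.
class GeolocationDenied(Exception):
    """Regulated action arrived without an explicit allow verdict."""

def enforce_geo_verdict(geo_claims: dict) -> None:
    data = geo_claims.get("data", {})
    # Fail closed: DENIED, missing, and unknown statuses all block.
    if data.get("status") != "PASSED":
        raise GeolocationDenied(str(data.get("status")))

def place_bet(user_id: str, stake: int, geo_claims: dict) -> str:
    enforce_geo_verdict(geo_claims)  # binding decision BEFORE committing
    # ... commit the wager only after the gate passes ...
    return "ok"

try:
    place_bet("<user-uuid>", 100, {"data": {"status": "DENIED"}})
except GeolocationDenied as exc:
    print("blocked:", exc)  # blocked: DENIED
```

The key design choice is that the check sits in the transaction path itself and fails closed: anything short of an explicit allow blocks the commit.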

That distinction matters. Plenty of reviews stop at "the SDK is integrated" or "the token is signed" or even "the client receives the correct result". What matters is whether the application actually understands the meaning of that result at the point where the regulated action is handled. In this case, Shinobi was able to reason about the application well enough to identify that this was not just an interesting implementation detail, but a high-impact compliance failure.
Why it might have slipped through review
This may not have been a case of bad engineering so much as split engineering.
One team can integrate the SDK. Another can own the transaction flow. The client-side block can ship in one sprint, while server-side enforcement is assumed, deferred, or lost between teams. In that sort of setup, everyone can see their part working and still miss the fact that the control is never enforced end to end.
That is what makes this class of issue dangerous. The SDK is present. The token is present. The verdict is present. The client behaves as expected. From the outside, the control looks complete.
But presence is not enforcement.
The question that mattered was whether the backend actually made a binding decision when it received a denied geolocation verdict on a regulated action. It did not. The signal arrived, the request went through, and the wager was accepted anyway.
That is the trap. Reviews often confirm that all the pieces are there. What they fail to confirm is where the server actually makes the decision that matters.
Why this matters beyond gambling
This pattern is not specific to sportsbooks.
The deeper point is not simply that a third party can produce a decision which the backend then ignores. The more interesting point is how Shinobi gets to that question in the first place. What the LLM behind it is doing is reasoning about the kind of application it is testing, the regulatory or governance environment that kind of application typically operates in, and the controls that environment implies should exist.
In this case, the cues were those of a regulated gambling platform: sportsbook workflows, regulated wagering actions, a geolocation verdict attached to those actions, and a third-party provider whose purpose is clearly tied to jurisdictional enforcement. That combination does not just suggest an integration. It suggests a control with compliance significance. A licensed operator is not collecting that verdict for decoration. It is collecting it because somewhere in the transaction flow there ought to be a server-side decision that says this user can place this bet, or this user cannot.
This is where the advantage of an LLM-backed pentesting platform becomes much more obvious. Shinobi is not just looking for technical misbehaviour in isolation. It is reasoning about what kind of application it is testing, what that application appears to do, and what kinds of rules, regulations, and governance expectations usually come with it. In practice, that is a lot like having a subject matter expert alongside the pentester throughout the engagement.
With those cues established, the reasoning naturally turned towards the kinds of controls gambling and gaming regulations depend on: geolocation, jurisdictional enforcement, restricted betting flows, and the need for those decisions to be binding at the point of transaction. The interesting part is not just that Shinobi saw a geolocation token. It is that Shinobi understood why that token mattered: it connected the purpose of the application to the regulatory significance of the control, then tested whether the backend was actually enforcing it.
The same reasoning carries over to other sectors, where the pattern appears around fraud decisions, sanctions screening, age assurance, or bot mitigation. The technical implementation changes, but the deeper question stays the same: given the purpose of this application, what controls should matter here, and where does the server actually enforce them?
That is why this class of issue is so easy to miss in ordinary testing. Everything can look healthy at the integration layer. The SDK is present. The token is valid. The result is logged. The client behaves as expected. But none of that proves the application is actually making the binding decision that the surrounding regulatory or governance context says it should be making.
Key takeaways
- Trace the enforcement point, not the integration. If a third-party control returns a verdict that matters to security, fraud, compliance, or safety, you need to know exactly where the backend consumes that verdict and acts on it. If you cannot point to the server-side decision, do not assume the control is really being enforced.
- Test the unhappy path, not just the healthy one. A token being present, signed, and logged proves almost nothing on its own. The real test is whether the application behaves differently when the control returns the condition it was supposed to stop.
- Look for missing logic, not just malformed input. Some of the highest-impact findings are not injections, broken crypto, or parser bugs. They are cases where the application receives the right signal and simply never makes the decision it was supposed to make.
- Let the purpose of the application shape the test. The interesting part is often not the token, header, or API call itself. It is understanding why it exists. If the workflow suggests gambling controls, healthcare obligations, privacy requirements, fraud checks, or identity controls, that context should drive the next hypothesis.
- Audit regulated and trust-sensitive workflows explicitly. For every action that could get you audited, fined, or blamed in an incident review, verify that there is a clear server-side enforcement point. This matters most where third-party verdicts feed transactions, onboarding, access decisions, or sensitive data handling.
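The second takeaway, testing the unhappy path, can be reduced to a very small harness. Here place_regulated_bet is a hypothetical stand-in for a real client driving a test environment; this version models a correctly enforcing backend so the sketch runs on its own:

```python
def place_regulated_bet(geo_status: str) -> int:
    """Hypothetical stand-in returning an HTTP-style status code.

    Swap in a real client against a test environment; this version
    simulates a backend that enforces the verdict.
    """
    return 200 if geo_status == "PASSED" else 403

def test_denied_verdict_changes_behaviour() -> None:
    # The real test: a denied verdict must change the server's answer.
    assert place_regulated_bet("PASSED") == 200
    assert place_regulated_bet("DENIED") != 200, \
        "backend ignored a DENIED geolocation verdict"

test_denied_verdict_changes_behaviour()
print("unhappy-path check passed")
```

Against the operator described above, the DENIED assertion is exactly the one that would have failed.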
This finding was identified during a Shinobi engagement against a regulated iGaming operator. All operator-identifying details have been anonymised. xPoint is named because its product behaved correctly. The integration pattern, not the SDK, is the point of the write-up.
Curious what deeper, context-driven testing finds that scanners often miss? Visit shinobisecurity.com or follow us for the next Bug of the Week.