G GATE MEDIA spoke with Max Tesla, Co-founder & CEO of Blask, about why the company launched Blask Awards as an alternative to traditional iGaming awards — with no applications, no fees, and “data as the jury.” Below is the full translation of the original interview.
How algorithms choose winners — and why data matters more than trophies
G GATE MEDIA: Your team has been on the other side of the awards process: nominations, shortlists, and a high-profile win at SiGMA Pitch Asia 2024. Back then, you were evaluated by real people — a jury. How did that experience influence the decision to create your own award? Was there a moment when you were on stage (or preparing your application) and thought: “This is great, we won — but the evaluation system itself needs to change”?
Max Tesla: I think these are two parallel stories that just rhyme nicely.
Over the last few years, the market has taken a huge step forward, and often the mechanics of pitching and selecting a winner no longer match the scale of the industry: you’re competing not with the market, but with those who bothered to submit an application.
Winning SiGMA Pitch Asia 2024 wasn’t the trigger for Blask Awards. Our award wasn’t created as a replacement for everything that exists — it was built as a fully alternative entity.
The decision sits more in the realm of product and brand. For us, AI isn’t a “gimmick for hype” — it’s the foundation of how Blask exists at all: we see the market as a whole, we can find all operators, and we can measure their momentum, efficiency, and potential.
In most awards you compete with those who filled out a form. In Blask Awards you compete with everyone at once — with the entire market, even if you didn’t apply anywhere or talk to anyone. That, in my view, is an honest league.

G GATE MEDIA: Any marketing director knows what hell it is to apply for awards: fill in forms, pay fees, write a polished essay about why you’re the best. Is your “no applications” decision an attempt to solve this market pain — or simply a consequence of the fact that you already have all the data? How exhausting was the awards-application process for you personally?
Max Tesla: I won’t pretend the process of submitting an application and then tracking it is fast, simple, or intuitive 🙂
But credit where it’s due — lately it has become much more transparent, at least in some cases.
Still, the main problem isn’t time or transparency. Filling in an application is inherently subjective. What exactly does a company bring as “evidence”? Which numbers can you show, which can’t you? Which cases, quotes, screenshots, examples? It’s all behind a curtain: no one reveals their successful pitch decks.
And in the end, you can almost never answer honestly: why did Company A win, not Company B? Because “the pitch was stronger”? Because the booth was bigger? Because someone bought ten pages of media support? Because the essay was better? All of that could be true — and still have nothing to do with real market performance.
In theory, this kind of dispute is resolved in Blask Awards in 30 seconds: you can open the card of any winner or participant and verify it in the product. You don’t even need to be a customer — basic information is available for free. Though of course the full picture is more visible with a subscription.
G GATE MEDIA: In traditional awards, brands apply, prepare cases, worry, and then post about participation on social media and in blogs. In your case, the winner might not even know they’re “participating.”
What happens if a company wins “Operator of the Year” but says: “We’re not interested; we don’t work with you”? Will you still give them the award? How will it work physically? Do you plan to give winners a detailed report explaining why they won? For a business, understanding the reasons behind success can be more valuable than the trophy.
Max Tesla: That’s the beauty of “data is the jury.”
The award can’t be transferred to anyone other than the winner — not because judges liked them more, but because the metrics don’t reach the top line. Everything is objective.
Custom reports for each winner or nomination aren’t planned yet. But the methodology for selecting winners and detailed descriptions of the metrics are publicly available.
In that sense, Blask Awards is a bit different from other awards. We recognise results brands have already achieved — and they don’t need to do anything else for it. It’s pure recognition of merit.
G GATE MEDIA: Your terms say: “Only regulated markets.”
But we all understand that a huge share (if not most) of iGaming money is made in the grey zone or in markets with “formal” licences. What’s behind this decision? Risk for you? Risk for operators? Not enough data?
Max Tesla: We’re not pretending the grey zone doesn’t exist. We know it’s enormous. Still, we don’t want to encourage ambiguous interpretations of Blask Awards by regulators, media, and other market participants.
Why only regulated markets? Two reasons.
First: methodology and comparability. In unregulated or semi-regulated markets, the rules of the game differ too much: taxes, advertising limits, KYC requirements, access to payment rails, distribution channels, media rules.
If you compare a fully regulated market with an unregulated one, you’re not comparing “brand visibility” — you’re comparing freedom to bypass restrictions. In an award context that becomes an unfair matchup: those with fewer restrictions get a “free boost” in metrics simply because they carry less regulatory load.
Second: data and verifiability. Regulated markets have more anchor points: regulator reports, public sources, clearer market boundaries, less “noise.” This allows us to evaluate more accurately — and defend the result if someone comes along and says: “Show me why.”
📌 Read also: Blask’s path to a novel metrics breakdown framework for local and international iGaming brands
G GATE MEDIA: “Data is the jury” sounds great, but any analyst will ask: “Where does the data come from?”
How do you guarantee validity? How does the system distinguish organic brand growth from bot manipulation or incentivised traffic that has no LTV? If we’re talking facts, how deeply can your software “see” into an operator’s internal engine?
Max Tesla: The foundation of our methodology is Share of Search: the share of branded search queries as a proxy for market share. We calculate the Blask Index for each brand and use it as a leading indicator — an early signal of shifting interest and market share.
To prevent “industry tricks” from distorting the picture, we don’t measure domains — we measure brands. Mirrors and clones are merged into a single profile, and queries are filtered by intent — this removes noise and spikes that don’t reflect real demand.
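As a purely illustrative sketch of the merging step described above (the brand names, query counts, and mirror map are invented, and this is not Blask’s actual pipeline), share of branded search with mirrors folded into a single profile might look like:

```python
# Hypothetical sketch of Share of Search: branded query volume as a
# market-share proxy. All numbers and names are invented.

# Raw monthly branded-query counts, keyed by the domain users searched for.
raw_queries = {
    "brand-a.com": 60_000,
    "brand-a-mirror.net": 15_000,  # mirror of brand-a.com
    "brand-b.com": 25_000,
}

# Mirrors and clones are merged into a single brand profile.
mirror_map = {"brand-a-mirror.net": "brand-a.com"}

merged = {}
for domain, count in raw_queries.items():
    brand = mirror_map.get(domain, domain)
    merged[brand] = merged.get(brand, 0) + count

# Each brand's share of total branded search in the market.
total = sum(merged.values())
share_of_search = {brand: count / total for brand, count in merged.items()}

print(share_of_search)  # {'brand-a.com': 0.75, 'brand-b.com': 0.25}
```

Merging before dividing is the point: without the mirror map, brand-a’s demand would be split across two “brands” and its share understated.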

🔍 Read also: Share of Search: a new metric for understanding market volume
As for manipulation and incentivised traffic: we have our own algorithms and response protocols that help us evaluate situations objectively. If any of our metrics for a brand begins to multiply rapidly without clear reasons (atypical seasonality, divergence in query clusters, and other signals), we review those spikes manually.
If there’s no logical explanation, the system flags an anomaly. It’s not 100% automated yet — but we’re moving in that direction.
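The spike-flagging logic described here can be sketched minimally as a ratio check against a recent baseline; the threshold, function name, and series are my assumptions, not Blask’s actual algorithm:

```python
# Hypothetical spike check: flag a brand whose metric suddenly multiplies
# relative to its recent baseline. Threshold and data are invented.
from statistics import mean

def flag_anomaly(history, latest, ratio_threshold=3.0):
    """Return True if the latest value is a large multiple of the recent mean."""
    baseline = mean(history)
    return baseline > 0 and latest / baseline >= ratio_threshold

weekly_index = [100, 104, 98, 101]          # stable recent weeks
print(flag_anomaly(weekly_index, 110))      # False: normal movement
print(flag_anomaly(weekly_index, 420))      # True: queued for manual review
```

In practice a real system would layer in the signals mentioned above (seasonality, divergence between query clusters) before escalating to a human.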
And yes: we don’t have access to operators’ internal data (CRM, LTV, payments).
G GATE MEDIA: You highlight your own unique metrics: BAP, Blask Index, and CEB. What algorithms are behind them? Why should the market accept these metrics as a standard?
Max Tesla:
- Blask Index is an industry interest index — an aggregated demand signal across brands within a country market.
- BAP reflects what share of market power/attention a brand holds in a selected market. In meaning, it’s close to market share, but not based on “after-the-fact” sales — it’s based on the current external picture of brand presence, which helps you understand future demand potential.
- CEB is an external revenue benchmark in dollars: how much a brand, on average, should earn given its market strength and the current competitive landscape. It’s not accounting GGR and not an attempt to guess your reporting — it’s a reference point for comparison and diagnostics.
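One way to read how the three metrics relate, purely as an illustration (the formulas, index values, and market-revenue figure below are my assumptions, not Blask’s published methodology):

```python
# Illustrative relationship between the three metrics as defined above.
# All formulas and numbers are invented assumptions.

# Blask Index: aggregated demand signal per brand in one country market.
blask_index = {"brand-a": 60.0, "brand-b": 25.0, "brand-c": 15.0}

# BAP: a brand's share of aggregate attention/power in the market.
total_index = sum(blask_index.values())
bap = {b: idx / total_index for b, idx in blask_index.items()}

# CEB: an external dollar benchmark - attention share applied to an
# estimated total market revenue (figure invented for the example).
estimated_market_revenue = 10_000_000
ceb = {b: share * estimated_market_revenue for b, share in bap.items()}

print(ceb["brand-a"])  # 6000000.0 under these invented inputs
```

The takeaway matches the framing in the interview: CEB is a reference point derived from external brand strength, not a reconstruction of anyone’s P&L.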
Why should the market accept this as a standard? It doesn’t “have to.” What matters is that it can understand and verify the logic.
Historically, the industry measures success either via internal financial numbers (which no one discloses) or via fragmented marketing metrics that are poorly comparable across countries and brands.
We also repackaged the old eFTD/eGGR into APS and CEB, because the familiar abbreviations sound like accounting and create the wrong expectations: people start comparing a model to P&L and then get disappointed not in the data, but in the name. APS and CEB are honestly labelled as baselines — not “what you had,” but “what should be achievable” given your market strength.
🚀 Read also: Leading the shift: ushering in APS & CEB for a new era of brand performance
In the end, it’s not a set of separate numbers — it’s a connected framework: brand strength + user acquisition potential + revenue potential, giving the industry a shared benchmarking language without access to internal systems.
The “standard” here isn’t “truth forever,” but a universal comparison point that reduces friction and helps people make decisions faster.
G GATE MEDIA: Usually awards are a business: selling tickets, sponsor tables, sponsorship packages, networking access. You’re cutting yourself off from that revenue. What’s your upside? Is this expensive PR for the Blask product, or do you have a long-term strategy to monetise the award status itself?
Max Tesla: Awards also come with costs — venue rental, logistics, promotion, and other line items. We cut ourselves off from those too 🙂
Blask Awards is a large-scale image and brand project. Its goal is to increase awareness of our brand and trust in it. We want to reach a point where more and more industry participants trust our data and metrics, not just gut feeling.
In the future, we might introduce sponsorship integrations into Blask Awards. But that will not affect the mechanics of selecting winners — winners will still be determined using our metrics.
G GATE MEDIA: In the games block, there’s a nomination called “Lobby Legend” (the game featured most often in the lobby).
That’s a powerful insight for casino managers. Does that mean you parse all casinos in real time? How big is your sample for these data to be objective?
Max Tesla: Here’s how we collect data for Games: every day a parser visits operator websites, opens the lobby and key categories, “scrolls” the page, and records which games hold which positions. Essentially, it takes snapshots of the positions of all titles.
Then a computer-vision model recognises the game by its logo and assigns it a lobby position for that operator. The average lobby position across all operators where the game is present becomes the GVR (Game Visibility Rank) metric: the smaller the number, the higher the game sits in the lobby (roughly: “position #1” is the most visible spot).
Scale-wise: right now we recognise 27,000+ games daily (around 30 genres, 100+ providers) and match them against a database of 50,000+ logos. Market leaders are definitely included in the sample, so the data are already representative enough to draw conclusions.
Important: we don’t “invent” what we can’t see. If a site was unavailable — there will be a gap in the chart. And some formats (like live dealer) we don’t measure yet, because they’re harder to recognise reliably.
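Based on the description above, GVR as the average lobby position across operators, with unavailable sites left as gaps rather than imputed, could be sketched like this (the function name and data are invented for illustration):

```python
# Hypothetical GVR computation: average lobby position of a game across
# the operators where it was observed. None marks a site that was
# unavailable that day - it is skipped, never filled in.

def game_visibility_rank(positions):
    """Mean lobby position; lower is better. Ignores gaps (None)."""
    observed = [p for p in positions if p is not None]
    return sum(observed) / len(observed) if observed else None

# Daily snapshot positions of one game on four operator sites.
daily_positions = [1, 3, None, 2]  # third site unavailable -> gap
print(game_visibility_rank(daily_positions))  # 2.0
```

Skipping gaps instead of guessing mirrors the “we don’t invent what we can’t see” principle: a missing snapshot leaves a hole in the chart, not a fabricated data point.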
The next step is to expand GVR beyond the lobby: include placements in the header and other site blocks — all the “non-trivial” placements that also generate attention.
G GATE MEDIA: What would be your success metric for this award? That winners’ positions go up? Or that other awards start adopting your audit methods? What’s your global goal?
Max Tesla: For me, Blask Awards success isn’t “winners go up” on the chart the next day. If the market moves because of a trophy, it wasn’t a market — it was theatre.
A good effect is when people use the result as a reason to test hypotheses: “Why is this brand at the top?”, “What exactly is working for them?”, “Where are we underperforming?” — and only then change product, marketing, partnerships.
The second success metric is that the award becomes verifiable. Not “trust us,” but “open Blask and see what it’s based on.” If in a year the market discussion sounds like “show the data,” not “who paid whom,” then we’ve hit the mark.
As for other awards… I don’t need them to “adopt our methods.” The tool isn’t the main thing. The main thing is that a tool exists at all — and that there’s a culture of audit, reproducibility, and transparency.
Our global goal is simple: shift the industry from essay contests and backroom decisions to real metrics. Make honest competition the norm, not a rare stroke of luck.
G GATE MEDIA: Most nominations are framed as “biggest,” “fastest,” or “best.” Aren’t you afraid the same giant brands will win year after year? Will you add nominations for ambitious newcomers or outstanding “smaller” players?
Max Tesla: First, we’re constantly expanding coverage: more countries, more operators, more games. If an operator wins year after year based on objective data, that’s not an award problem. That’s a compliment to the operator. It means they didn’t “get lucky once” — they can sustain performance and adapt.
At the same time, we plan to add new nominations every year to recognise not only established leaders, but also those who just entered the market and are achieving results. Maybe not super flashy yet — but already very important for them.
G GATE MEDIA: Blask is a commercial product that sells analytics. Is there a risk that companies who are already your clients have an advantage? For example, they understand your metrics better and can adapt strategies to your algorithms.
Max Tesla: Yes, Blask clients understand the metric logic faster — because they have access to the platform and can see market dynamics. But that’s not an “advantage in the award” — it’s an advantage in analytical discipline.
Blask Awards is calculated using the same methodology for everyone, with no applications and no manual “tuning” for clients. A subscription doesn’t buy you a place in the ranking.
And one more thing: “adapting to our algorithms” in practice means adapting to the market. We measure external signals of demand and visibility — not metrics you can honestly “inflate” without a real effect. If a brand grows, it means they’re doing something right, not that they guessed a formula.
We provide a top-down view of the industry — but an award (ours, or any award) goes not to the one who sees everything, but to the one who interprets what they see correctly.
G GATE MEDIA: The results aren’t published yet, but you probably already know the winners or at least the contenders. Will there be unusual, unexpected nominees? For example, public opinion and expert conclusions say the best operator is X — but they’re not even on the list, or their position is far from #1. Should we expect that?
Max Tesla: Big brands are top-of-mind for a reason — they invest massive resources in marketing, acquisition, and retention. So you shouldn’t expect huge surprises.
But in some nominations and geos, results may indeed diverge from general expectations.
So keep an eye on the winners this year: