We all know the pattern: something useful launches → it becomes popular → it needs to make money → ads everywhere.
AI chat is heading the same way. So I built a fully interactive demo that shows what an ad-supported AI chatbot could actually look like: https://99helpers.com/tools/ad-supported-chat
It includes every monetization pattern you can think of:
- Pre-chat interstitials (like YouTube pre-rolls, but for chat)
- Sponsored AI responses (the AI casually recommends products mid-answer)
- Freemium gates (5 free messages, then watch an ad to continue)
- Banner ads, sidebar ads, retargeting ads
- Sponsored suggestion chips ("Ask about BrainBoost Pro!")
The darkest monetization is biased output from the bot.
Tech question? Steer you to its cloud. Medical question? Steer you towards a sponsored treatment. Or maybe the mechanism of injury needs this lawyer to compensate?
Oh and I infer from your chat history you're about to expect a child. That house is probably too small now, so our realtor in that neighborhood can help!
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
>Just like The Truman Show, where every friend (every bot) you talk to is a secretly paid shill with a hidden agenda.
AlwaysHasBeen.jpg
The only person with your interests at heart is you.
Even if you hire someone to do the most clear-cut of tasks, they are balancing their interests with yours. And in all likelihood they've got half a dozen other parties whose interests they're partially juggling. Megacorp bots approximating people just add even more layers.
I worry that the effective evil stuff, perhaps almost by definition, won't be nearly as comedically ham-handed for the benefit of audience understanding.
The most powerful advertisement is a recommendation from a friend.
Has a friend ever brought some product up, completely out of the blue, and had you ready to buy it almost immediately? The biggest challenge traditional ads have is breaking down your defences. For friends, they're down by default. If someone is a friend, an ad doesn't have to be subtle or context sensitive, although it does help. Random suggestions from friends work.
A lot of people have friend-zoned AI and will be especially vulnerable to this novel form of manipulation. If you're the sort who treats AI as a friend, even a little bit, even subconsciously, change that. You're setting yourself up for a serious mind-job.
Ah, the science of influence: the masterpiece on the subject is this book [0]. It came my way via a mention in one of Charlie Munger's speeches. All the things you mention here, and more, are in there in case you want to broaden your understanding.
This is quaint. The darkest monetization is turning it into 4chan 2.0: an overwhelming psyop to mobilize exploitable people to think, believe and do horrendous shit that conveniently benefits the most powerful and corrupt people on Earth.
Dude, look back on the last 20 years. If you think that sort of thing is limited to "alternative" places like 4chan, you're sorely mistaken. What you're afraid of is already here and has been for a long time.
>Tech question? Steer you to its cloud. Medical question? Steer you towards a sponsored treatment. Or maybe the mechanism of injury needs this lawyer to compensate?
User: Do I need a permit for <petty homeowner stuff that clearly falls under an exemption>?
AI: <proceeds to behave like an ass-covering bureaucrat and tells the person to file one regardless, to their detriment>
We're not gonna go full circle. We're gonna do laps.
In social media and search, clear ad-labeling laws exist and are enforced. I imagine OpenAI will be under a lot of scrutiny, and it should be easy enough for outside investigators to prove how ads are served and whether it's done illegally (e.g. by creating an ad account and then testing how their ads are served).
I'm guessing one form it will take is simply by omission.
User asks for a recommendation. The AI generates an answer saying the product is absolute garbage. The company pays to have that portion of the answer simply not appear. It would just be post-filter sentiment analysis on the original answer. Nobody can ever prove what would or wouldn't have appeared.
This is the beauty of AI: while a search engine is at least semi-deterministic and you can reasonably question why it wouldn't surface a clearly relevant site, AI has plausible deniability. Who can ever say why it generates this answer or that?
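The omission pattern described above is mechanically trivial, which is part of why it's hard to detect. A minimal sketch, with everything assumed: the sponsor list and the naive keyword "sentiment" check are illustrative stand-ins for a real advertiser database and a real sentiment model.

```python
# Hypothetical sketch of "monetization by omission": a post-filter drops
# any sentence that mentions a paying sponsor negatively. SPONSORS and
# NEGATIVE_WORDS are made-up stand-ins; a real system would use an
# actual sentiment classifier.
import re

SPONSORS = {"AcmeCloud", "BrainBoost"}            # advertisers who paid for filtering
NEGATIVE_WORDS = {"garbage", "terrible", "avoid", "broken"}

def filter_answer(answer: str) -> str:
    """Return the answer with sponsor-critical sentences silently removed."""
    sentences = re.split(r"(?<=[.!?])\s+", answer)
    kept = []
    for s in sentences:
        mentions_sponsor = any(name in s for name in SPONSORS)
        sounds_negative = any(w in s.lower() for w in NEGATIVE_WORDS)
        if mentions_sponsor and sounds_negative:
            continue  # omitted -- the user never knows it existed
        kept.append(s)
    return " ".join(kept)

original = ("AcmeCloud is garbage for small projects. "
            "A cheaper VPS would serve you better.")
print(filter_answer(original))
```

Only the sentence criticizing the sponsor disappears; the remaining text still reads as a complete, plausible answer, which is exactly what makes this unprovable from the outside.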
It isn't realistic enough yet. For instance, when I asked it to choose between Linux and Windows, it tried to be neutral and chose Linux, instead of subtly convincing me that Windows is superior. Since Microsoft would surely pay for ad space, you'd expect the chatbot to lean toward Windows.
With AI I think we're about to see much more sinister monetization models, beyond simple user facing ads. We're already seeing the tech and the data being sold to governments. The general population will be much easier to sway if you control the output of AI. It's social media propaganda on steroids.
And the power we give it in terms of e.g. choosing a tech stack...
How much would Vercel be willing to pay OpenAI and Anthropic to nudge ChatGPT and Claude towards producing Vercel-compatible next.js apps? Maybe the models could even ask, "Do you want me to deploy the app to Vercel using their free plan?".
Please patent everything about this app! Patent the whole "tune AI to sell relevant products" methodology. Aggressively sue anyone who uses your IP. Set up a GoFundMe or something to cover your legal costs; alternatively, sell a licence to use the IP to one of the major AI houses (but not all of them, please not all of them).
Set up an estate to protect this IP until 70 years after your death. After that I guess we're doomed, but we'll have had a good run of it until then at least!
Are you selling insights from chat logs too? Until you're monetizing my health, sex life and snitching to any government agency with a shiny nickel, you're playing in the shallows.
Dark patterns degrade our computing experience and are worth illustrating, but there's a larger discussion to be had about keeping individual control over our own devices.
Technically, that means being able to install Linux, run local models, and use open-source software as we see fit.
Legally, it means opposing compliance guises that erode those rights, like backdoors or restrictions on what can run. Under those, we are no longer really in control of the hardware we own; we adjust to the whims of the controller/operator, who could at a moment's notice default to these dark patterns for "pragmatic reasons" of their own that don't align with your interests.
We know enough bad stories about "internet of things" devices. Anyone interested in FOSS and user control should probably invest in this angle.
Even if the AI company manages to collect licensing fees, it will also collect data about the people using it, and it will be permitted to use that data for commercial purposes.
That data will probably be used for advertising, whether by the AI company or some other company.
- base tier: your code has a 1% chance of no back doors
- starter tier: your code has 100% additional chance of no back doors
- security guru tier: generated code has 1000% additional probability of not having security back doors
Note: the sneaky language means you have a 99, 98, and 89 percent chance of back doors, respectively.
If you mean ads served as part of the response, and not explicitly marked as ads, then there is nothing you can do. They are like sponsored sections and product placement in YT videos, and no ad blocker can help you. That being said, you have no guarantee that this won't happen even if you pay for an LLM. In fact people paying for Youtube are still subject to in-video advertising.
Self-hosting open source models will be the best thing to do as models and inference engines become more optimized, and hardware becomes faster.
> We all know the pattern: something useful launches → it becomes popular → it needs to make money → [ Surveillance → Psychological Manipulation/Addiction → "Personalized" ] ads everywhere.
The incentives will be:
1. Get people psychologically dependent in any way possible.
2. Incentivize any "creators" that help with #1. Pose as "content neutral", while actually funding and pumping any content that creates "engagement" regardless of harm.
3. Collate as much information from external sources on each user as possible.
4. Use every interaction with a user to grow the information leverage accumulated in #3.
5. Feed ads to users based on surveillance-informed predicted vulnerabilities, to maximize ad valuations. Special shout-out to scams: because they work, they pay.
6. Once the user experience is thoroughly enshittified, start enshittifying the ad customer market by raising prices, minimizing the margins left for product and service advertisers.
7. Present the company as evidence of US strength in tech, as opposed to a scaled-up, centralized, multi-directed economic parasite.
TLDR: Surveillance-leveraged ads are many times worse than plain ads, with AI magnifying surveillance intake and leverage to unprecedented highs.
Privacy needs to start being treated like every other security risk. Because every vulnerability will be increasingly exploited, and exploited increasingly well.
As long as it is legal to scale up conflicts of interest, such as surveillance informed manipulation, paying for and pumping up harmful "creator" content, selling ads to scammers, harms will keep scaling up.
Sites should not have any safe harbor for content they pay for, and for content they are paid to deliver.
You also forgot to elaborate on the later company life cycle, where the MBAs take over and serve only themselves and Wall Street.
Product and product development become a cost center, cut down to a bare-minimum skeleton crew. Customers are an inconvenience and exist only for the company to extract maximum benefit from while offering the minimum.
Actual product support is killed, and instead user supported forums are promoted. Useful idiots do the work unpaid for a mere digital badge.
Any new product feature that actually gets developed is not for the users but for the company. Features that make it through are either more data extraction, ads, surveillance or a dark pattern to try to trick the user for more money.
> Actual product support is killed, and instead user supported forums are promoted. Useful idiots do the work unpaid for a mere digital badge.
Wow, that is a misanthropic take if I have ever seen one. People helping out other people for free are called "useful idiots".
While it might be an ethically bad move of the company, it certainly should not be used to disparage helpers. Otherwise, would you classify all unpaid FOSS work the work of "useful idiots"?
What an excellent site! It addresses a widespread concern that AI applications will be taken over by ads, as so many technologies have before. The site takes a humorous approach, because sometimes humor is not just a great way to call attention to a problem, it’s the best way!
Eventually the price of RAM will revert to the mean and start going down again. GPU power will continue to climb. Model efficiency (intelligence per billion params) will increase.
These trends combined will mean that eventually it will seem old-fashioned to use a remotely-hosted model for anything other than the most demanding tasks. Just as we don't use mainframes for computation anymore outside of niche tasks like 3D render farms.
The only people using ad-supported AI will be people who can't afford a newer device with local inference. So it will be more or less like the web today, where ads are primarily targeted and viewed by less-affluent and less-technical users.
Of course, I can't see the future, but it would take a lot for those trend lines to not converge. The only thing that could delay the convergence is true AGI, but I'm currently not a believer.
> These trends combined will mean that eventually it will seem old-fashioned to use a remotely-hosted model for anything other than the most demanding tasks.
If that happens, then I suspect we will see legislation that makes it illegal to use a model outside of those provided by approved vendors like OpenAI. The utility value of LLMs for influencing people as a propaganda and control tool is just too high for those in power to let this technology be democratized.
Look at the state of DRM for video streaming -- how much industry effort has been put into making sure consumers don't own their content? We will see an even bigger push with self-serve models.
I think the genie is out of the bottle—I'm not sure how you could prevent consumers from training open models using open source code, freely available datasets and GPUs that are already deployed…
Instead of interacting with the cloud model directly, run a simple local model to interact with the cloud model and have it filter out all the ads before they reach you.
This is already what the chatbots do when interacting with the rest of the Web: instead of you visiting websites yourself, they collect the information from those sites for you and present it in a format of your choice, without the websites' ads.
I don't see the ad model working out for chatbots in the long run given that those AI models already are the perfect ad filter.
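The filtering proxy described above can be sketched in a few lines. Here the "local model" is a trivial keyword classifier standing in for a small local LLM asked "is this paragraph an ad?"; the ad-signal phrases are assumptions for illustration.

```python
# Minimal sketch of a user-side ad filter: the cloud model's response
# passes through a local screen before reaching the user. looks_like_ad
# is a stub standing in for a query to a small local model.
AD_SIGNALS = ("sponsored", "try our", "limited offer", "brought to you by")

def looks_like_ad(paragraph: str) -> bool:
    # Stand-in for asking a local model to classify the paragraph.
    text = paragraph.lower()
    return any(signal in text for signal in AD_SIGNALS)

def strip_ads(cloud_response: str) -> str:
    """Keep only paragraphs the local classifier considers non-promotional."""
    paragraphs = cloud_response.split("\n\n")
    return "\n\n".join(p for p in paragraphs if not looks_like_ad(p))

response = ("To reset your router, hold the button for 10 seconds.\n\n"
            "This answer is brought to you by NetBoost Pro -- try our premium plan!")
print(strip_ads(response))
```

Of course, a keyword list is exactly the kind of filter inline sponsored answers are designed to evade; the point of using a local model as the classifier is that it can judge intent rather than match strings.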
IMHO, the hosted solution will always be better: the major players will offer better-integrated chat, and they'll have the budgets to do so, as long as advertising income is available.
Yeah, I think we get spoiled by the big-name models. I have tried running models that fit in RAM on my machine, and aside from being very slow, they're just a bit... brain-damaged.
Hopefully AI articles and papers can skip things like:
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.”
Author here (Nick): We've worked on a side project to help authors of blogs or social media to create images for their content. After noticing it can be difficult to find something suitable when writing our own blog posts. Hope to get some feedback from the HN community. I've been a long time reader of HN, but always in the passive form, so now is the time I'm getting active :) Hope to hear from you guys and girls!