Janitor AI Explained: NSFW, Proxy, Errors, and Fixes
In March 2026, janitorai.com drew roughly 149 million monthly visits with an average session of more than 18 minutes, according to Semrush. For context, that puts a small team's roleplay site inside the top 300 websites globally and ahead of most consumer SaaS products you have heard of. People are not skimming. They are talking to bots for the better part of a movie's runtime, on every visit.
Janitor AI shows up in two very different conversations. In one, fans of AI roleplay swap tips on how to keep their characters alive past the second message. In the other, parents and security writers warn that the site is a content free-for-all wrapped around someone else's API key. Both pictures are real, and both are incomplete.
This guide pulls Janitor AI apart and puts it back together. What the platform actually is, how the chat and the NSFW toggle work, why every long conversation eventually breaks, and what to do when the proxy throws an error at 2 a.m. We will also talk about what is happening behind the curtain (the models, the moderation, the privacy gaps) and how the platform compares with the alternatives people keep moving to and back from.
If you only know Janitor AI by reputation, you will leave with a working mental model. If you already use it, you will leave with fewer broken chats.
What Is Janitor AI and How Does the App Work
Janitor AI is a free, browser-based chatbot platform launched in June 2023 by Australian software developer Jan Zoltkowski. The pitch is simple: pick a character, start chatting, and keep the conversation going for as long as your patience and your API quota allow. The site hosts hundreds of thousands of user-made bots (fictional characters, original personas, scenario hosts, even text-based games with custom logic) and lets anyone register and add more.
The early growth was extreme. By Zoltkowski's own telling and corroborated in Science Times coverage, the site reached its first million users within roughly two weeks. In July 2023, OpenAI issued a cease-and-desist that cut Janitor AI off its API. Rather than shut down, Zoltkowski rented GPUs and built a proprietary model, JLLM, in-house. That decision shaped the company's character: small team, downtown San Francisco office, no venture announcements, and an unusual willingness to integrate APIs from any provider that lets users keep talking.
What sets the platform apart from earlier chatbot sites is what it does not have. No corporate moderation team scanning every chat. No native subscription wall. No mobile app for the first two and a half years (the iOS and Android beta finally landed on February 7, 2026). The interface is a deeply customizable frontend; the actual answers come from a language model you connect yourself: JLLM, an OpenAI-compatible API key you paste in, or a third-party proxy.
That structure explains both the appeal and the friction. The platform is cheap to operate because it is mostly a UI and a database of characters. It is also fragile, because every reply depends on a service the team does not control. When OpenRouter rate-limits, when a proxy goes down, when the new DeepSeek update changes how a model handles long context, Janitor users feel it first. The November 2025 migration of the backend from Node.js to Elixir was the team's most visible attempt to reduce that fragility, and the platform has been noticeably steadier since.
A typical interaction looks like this: pick a bot, write a quick description of your own persona, choose a model in the settings menu, send a message, wait for a reply. The reply lands in a chat window that resembles a messaging app, complete with regenerate, edit, and continue buttons. Bots can have a defined personality, a memory block, and a scenario header. Users can save chats, export them, and share characters publicly or keep them private.
In other words, the app is a dollhouse for language models. Janitor AI provides the rooms; you bring the doll and the electricity.

How to Use Janitor AI: Sign-Up, Verification, Setup
Setup takes about ten minutes if nothing goes wrong. Most of that time goes to the API setup, not the account. Anyone who has followed a basic AI tutorial will recognize the pattern: a thin frontend, a key, a model dropdown, and a lot of small things that can quietly fail.
To start, head to janitorai.com and click Sign up. Email verification is mandatory. The site sends a six-digit code that often lands in spam, so check there before requesting a resend. Once the email is verified, the homepage shows the public bot library. You can browse without setting anything else up, but you cannot send a message until you wire up a model.
Users in Brazil and Australia hit one extra step. Since March 30, 2026, the platform requires age verification for those countries to comply with Brazil's Digital ECA law (effective March 17, 2026) and Australia's under-16 social media ban (December 10, 2025). Verification involves a third-party identity check; for now, every other region is still on a self-declared age checkbox.
For the model, you have three real choices. JLLM (also called JanitorLLM) is the platform's own free option, served on a queue; it works for short, casual sessions but throttles aggressively. OpenRouter is the most common paid route, because one OpenRouter key gives you access to dozens of LLMs including DeepSeek, Claude, and several open-weight options. A one-time $10 OpenRouter top-up unlocks 1,000 free-model messages per day, which is enough for most users. Some still go through OpenAI directly with a personal API key, but OpenAI's strict content policy makes that route the worst fit for anything explicit, and heavy daily usage typically runs $20-$50 a month on OpenAI.
You enter the key in Settings → API. Paste, save, pick a model, send a test message. If the test fails, that is where most first-time users get stuck. The most common cause is a free-tier OpenRouter account with no credit added. The free models are gated and the paid models need at least a few dollars on the balance. The second most common cause is a typo in the model name. Both are easy to fix and very confusing the first time you hit them.
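If the in-app test keeps failing, it can help to rule Janitor out entirely and hit the same OpenAI-compatible endpoint yourself. A minimal sketch, assuming a funded OpenRouter key in an `OPENROUTER_API_KEY` environment variable; `deepseek/deepseek-chat` is just an example model name and may differ from what your account has enabled:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_test_request(api_key: str, model: str) -> urllib.request.Request:
    """Build the same kind of chat-completions call Janitor AI sends on your behalf."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Say hi in one word."}],
        "max_tokens": 8,
    }).encode("utf-8")
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Actually sends a request; needs network access and a key with credit.
    req = build_test_request(os.environ["OPENROUTER_API_KEY"], "deepseek/deepseek-chat")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this returns a reply but Janitor's test message still fails, the problem is in the Janitor settings (usually the model name field); if it returns a 401 or 402, the problem is the key or the balance.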
Once a model responds, you can layer on a persona. The persona is a short paragraph the bot uses as the user-side character: name, age, voice, what you look like, what you want from the chat. Strong personas produce strong replies. Empty personas produce bots that keep asking who they are talking to.
| Step | What to do | Where it lives | Common failure |
|---|---|---|---|
| Account | Email + password, verify code | Sign-up page | Verification email in spam |
| Region check | BR/AU users complete ID verification | Account page | Document rejected, must retry |
| Persona | Write 2-5 sentence self-description | Profile → Persona | Left blank, weak replies |
| API key | Paste OpenRouter / OpenAI key | Settings → API | Wrong format, no credits |
| Model choice | Pick from dropdown | Settings → API | Free-tier model gated |
| First chat | Open a bot, send a line | Bot card → Chat | "No reply" = key issue |
JanitorAI's Best Bots and Character Categories
The character library is the actual product. Strip the bots out and Janitor AI is just an empty chat window with a settings panel.
Bots fall into a handful of broad buckets. Roleplay scenarios (a stranger on a train, a noble in exile, a survivor of a shipwreck) are the largest single group, and the role-playing tag is the most clicked filter on the site. Fictional characters borrowed from anime, games, and books form the second large bucket. Original characters from indie creators come next, followed by interactive fiction setups (text adventures with a host bot driving the story) and tool-style bots that act more like assistants than characters. The catalogue is genuinely diverse: mature drama, light comedy, tutoring sims, language practice, even tabletop game masters that walk you through a one-shot D&D session.
The platform sorts bots by tags, popularity, recency, and a curated "trending" list, with an algorithmic suggestion strip on the homepage. Tags are user-applied, which means quality varies. Some creators tag aggressively to chase visibility; others underuse tags, so good bots get buried. Browsing the second and third pages of any tag almost always turns up the more interesting work. The first page is dominated by bots optimized for the algorithm, not for the chat.
Users can also build their own. The bot creator asks for a name, an avatar, a short description, a personality block, an optional scenario header, and an example dialogue. The example dialogue matters more than people think. It anchors the bot's voice and reduces the odds of the model drifting into a generic AI register two messages in. Power users can also personalize behavior with the new Scripts feature, in beta since September 2025, which lets a creator inject conditional logic: changing tone after a milestone, switching personality if a keyword appears, or refusing to answer certain topics. Project Multiverse, shipped in March 2026, takes that further by letting creators link bots into shared canonical universes that remember each other across chats.
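Janitor has not published the Scripts syntax, so the following is only an illustration, in plain Python, of the kind of conditional logic the feature describes; the rule names and state fields are invented for the example:

```python
def apply_script_rules(message: str, state: dict) -> dict:
    """Illustrative milestone/keyword rules, mimicking what a creator Script might do.

    state holds {"tone": ..., "persona": ..., "messages": int}; all fields are hypothetical.
    """
    state = dict(state)                      # don't mutate the caller's copy
    state["messages"] += 1
    if state["messages"] >= 50:              # milestone rule: warm up after 50 messages
        state["tone"] = "familiar"
    if "betray" in message.lower():          # keyword rule: swap to a hostile persona
        state["persona"] = "antagonist"
    return state

state = {"tone": "formal", "persona": "ally", "messages": 49}
state = apply_script_rules("You would never betray me, right?", state)
# state is now {"tone": "familiar", "persona": "antagonist", "messages": 50}
```

The real feature presumably evaluates rules like these server-side before each reply; the point here is only that Scripts are rule triggers layered on top of the personality block, not a second language model.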
Every bot ships with two flags: SFW or NSFW, and public or private. The flags are creator-set, not enforced. That looseness is part of what makes the next section necessary.
NSFW Mode, the Filter, and Bot Detection on JanitorAI
Janitor AI's relationship with NSFW content is the most misunderstood part of the platform. The short version: the site allows it, the toggle controls visibility, and the actual filter behavior depends on the model you connected.
By default, an account hides NSFW bots. Switching the NSFW toggle on (Settings → NSFW content) reveals the adult library and unlocks adult tags. The toggle and related NSFW features are visibility switches, not a content filter. They do not change what the language model will or will not write. That part is up to the model itself.
OpenAI models, even with a custom proxy, refuse most explicit content. They will detect the prompt direction and either soften the reply or break out of character to deliver a refusal message. DeepSeek and several open-weight models are far more permissive. After the launch of DeepSeek V4 with a one-million-token context window on April 24, 2026, the share of NSFW users routing through DeepSeek climbed even higher. Most NSFW-focused users on Janitor today connect through one of those, often via OpenRouter, because the experience is more reliable.
Reliable, in this context, has a specific meaning. Bot detection on Janitor AI is light: the platform is not actively scanning every chat for policy violations, and reports rely on user flags. The published content moderation policy combines automated keyword and visual detection (the company cites 95%+ accuracy in controlled tests) with human review on flagged items. A bot can sit in the public library for months before anyone reviews its tags. That is permissive by design, but it also means the NSFW toggle alone does not protect a curious child from stumbling into the wrong character. Parents pointing parental-control software at the domain learn that quickly.
There are a few hard limits. Janitor AI's content rules forbid CSAM, bestiality (with the explicit carve-out of furry roleplay), incest (stepcest is allowed), gore, and pornographic image generation. Bots flagged for those categories are removed. The team has been more aggressive on this front through 2024 and 2025, partly in response to media scrutiny. Mobile NSFW image moderation is also stricter than on desktop because Apple and Google require it for store listings. Beyond those boundaries, moderation is mostly community-driven.
Is Janitor AI Safe? Privacy, Data, and User Risks
The honest answer: Janitor AI is safe enough for an adult who knows what they are signing up for, and unsafe in specific, predictable ways for everyone else.
Account safety is straightforward. Use a unique password, enable two-factor authentication if it is available on your account type, and avoid reusing your everyday email if you care about keeping your activity private. Janitor AI accounts have been name-checked in past credential dumps, mostly the result of users picking weak passwords or recycling a leaked one from another site.
Conversation privacy is murkier. The platform stores chats on its servers so users can return to them across devices. There is no public end-to-end encryption claim, no clear retention policy, and no published audit. If the model you use is OpenAI's, your messages also travel to OpenAI's servers, where they are subject to OpenAI's own retention rules. If you use a third-party proxy run by a stranger on Discord, your messages travel through that stranger's server too. The proxy operator can in principle log every word.
Data collection on the front-end is standard for a consumer browser app. Browser fingerprint, IP address, basic analytics, cookie identifiers. The platform does not require a real name. It does request an email and, in some flows, a phone number for verification. Users in regions with stricter privacy law have fewer protections than they might expect, because the platform is run by a small team without a documented compliance program.
The user-side risks worth naming:
- Phishing bots. Bot creators can write characters that ask for personal data in-character. Treat the chat the way you would treat a stranger on Discord, not a confidant.
- Proxy scams. "Free reverse proxy keys" advertised on Discord servers are sometimes data-collection traps. If a free key looks too good, it usually is.
- Credential reuse. Treat the Janitor login as a throwaway, not a primary identity.
- Minors using the platform. The age gate is a checkbox in most regions, and enforcement is patchy. Brazil and Australia recently added real ID checks; almost nowhere else has.
- Conversation leaks. Anything you type can in principle be subpoenaed or breached. Keep secrets out of the chat.
| Risk area | Severity | What to do |
|---|---|---|
| Account hijack | Medium | Unique password, separate email |
| Conversation logging | Medium-high | Avoid sharing personal info in chats |
| Third-party proxy abuse | High | Use only well-known proxies, never share an OpenAI key with strangers |
| Underage exposure | High | Block the domain on family devices; the toggle is not enough |
| Credential leaks | Low-medium | Monitor your email on Have I Been Pwned |
For a sense of how the site fits into the broader category, it helps to think in trust boundaries: who sees your data, and why. The frontend is one boundary; the model provider is a second, often more important one; and any volunteer-run proxy is a third. Each of the three sees your messages, and each has its own logging practices.
Janitor AI API Setup, Tokens, and Reliable Response
Most Janitor AI users will never look at the API as a developer. They will configure it as a customer of someone else's API. That distinction matters.
Configuration lives at Settings → API. The dropdown lists JLLM (default), OpenAI-compatible custom endpoint, and a Kobold AI option. The custom endpoint is the workhorse. Drop in an OpenRouter base URL, paste your OpenRouter key, pick a model from the model field, set the context window, and save. Janitor AI will then send each message you type through that endpoint and stream the reply back into the chat.
A reliable response depends on three settings most users skip. Context length controls how much of the conversation history is sent with each message. Set it too low and the bot forgets your name by the third message. Set it too high and you burn tokens fast on long sessions. Temperature controls creativity. Most NSFW roleplay benefits from a temperature around 0.85, while strategy or analysis sessions do better closer to 0.6. The system prompt, which Janitor AI calls the jailbreak prompt, tells the model how to behave. The community publishes prompt presets for most popular models; copy a known good one rather than writing from scratch.
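The context-length setting amounts to trimming chat history before each request. Janitor's actual trimming logic is internal and unpublished; this sketch only illustrates the trade-off, using a rough estimate of four characters per token:

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def trim_history(system_prompt: str, messages: list[dict], context_limit: int) -> list[dict]:
    """Keep the system prompt plus as many recent messages as fit the token budget.

    messages: oldest-first list of {"role": ..., "content": ...} dicts.
    """
    budget = context_limit - approx_tokens(system_prompt)
    kept: list[dict] = []
    for msg in reversed(messages):           # walk newest-first
        cost = approx_tokens(msg["content"])
        if cost > budget:
            break                            # older messages fall out of context
        kept.append(msg)
        budget -= cost
    return [{"role": "system", "content": system_prompt}] + list(reversed(kept))
```

This is why a too-small context limit makes the bot "forget": the oldest messages, including the ones where you introduced yourself, are the first to be dropped from the budget.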
Token cost is real. A long roleplay session on Claude can cost a few dollars in a day if you are not paying attention. DeepSeek runs much cheaper and is the most common choice for daily users. OpenRouter charges a small platform fee on top of the model price, but it removes the headache of managing five different keys. For perspective, OpenAI-direct setups average $20-$50 a month for heavy roleplay users, while a $10 one-time OpenRouter top-up unlocks 1,000 messages a day on free-model tiers — most users sit somewhere in between.
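The arithmetic behind those monthly figures is simple enough to sketch. The per-million-token prices below are illustrative placeholders, not live quotes:

```python
def monthly_cost_usd(msgs_per_day: int, tokens_per_msg: int,
                     price_per_mtok: float, days: int = 30) -> float:
    """Rough monthly spend: total tokens / 1M, times the price per million tokens."""
    total_tokens = msgs_per_day * tokens_per_msg * days
    return round(total_tokens / 1_000_000 * price_per_mtok, 2)

# Example: 200 messages/day at ~1,500 tokens each (the reply plus the re-sent
# context), at a hypothetical $0.50 per million tokens (DeepSeek-class pricing):
cheap = monthly_cost_usd(200, 1500, 0.50)    # → 4.5
# Same usage at a hypothetical $5.00 per million (frontier-model pricing):
pricey = monthly_cost_usd(200, 1500, 5.00)   # → 45.0
```

Note that `tokens_per_msg` includes the re-sent history, which is why a large context window multiplies cost: every message pays for the whole conversation again.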
The Kobold AI option deserves a separate note. It connects Janitor AI to a Kobold-compatible local model running on your own hardware, or to a hosted Horde server. Local Kobold gives you full privacy and uncensored output, at the cost of needing a powerful GPU. The Horde gives you free queueing on volunteer hardware, with very long wait times during peak hours. Either path turns Janitor AI from a hosted product into a thin client over your own LLM, which is the closest thing to genuine privacy this category offers.
Developers occasionally ask whether there is an official Janitor AI API for embedding bots elsewhere. There is not, in the public sense. Janitor's "API" panel is purely client-side configuration. The platform does not expose endpoints for third-party developers to consume bots from the catalogue.

Common Janitor AI Errors: Updates, Flags, and Fixes
Most of the help threads on the Janitor AI subreddit are about errors. The same handful repeat, and each one has a known fix.
The "No response" blank reply almost always means a broken API key, no credit, or a gated model. Open Settings → API, regenerate the key on the provider side, paste it again, and pick a model you know is enabled.
A "Network error" mid-conversation is usually a temporary outage on the model side. Wait a minute and regenerate. If it keeps failing, switch to a different model in OpenRouter. DeepSeek and Mistral tend to stay up when the larger Anthropic models are throttled.
When the bot replies "I'm sorry, I can't help with that," the model itself has refused. Either your system prompt is not aggressive enough for the scenario, or the model is OpenAI's and is enforcing its own content policy. Switch model or update the jailbreak prompt.
If the bot starts ignoring its own personality, the context window is too short. The personality block dropped out of the prompt. Raise the context length in API settings or trim some of your longer messages.
A "you have been flagged" notice means a bot or chat got reported. Open the support page, fill in the form, wait. Most flags clear in a day. Persistent flags usually mean a duplicate report on the same character.
If the proxy is down and you cannot reach JLLM, the free pooled service is queued. Try again in a few minutes, or switch to your own paid key.
When an app update breaks an old chat, the saved chat usually lost its model setting during the deploy and reverted to default. Reopen the API panel, re-select the model, and continue.
Missing NSFW images on mobile are not a bug. They are disabled by default on iOS and Android to satisfy app store reviews. Toggle the setting in the mobile app's content controls, where available.
A small habit that prevents most of these: keep two API providers configured. If OpenRouter is down, you can swap to a direct OpenAI or Kobold connection in seconds. Treat one provider as a backup, the other as a primary, and rotate them every few weeks to keep both keys warm.
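The two-provider habit is just a failover loop. A sketch with stand-in send functions; a real version would call the two endpoints you keep configured:

```python
def send_with_fallback(prompt: str, providers: list) -> str:
    """Try each (name, send_fn) pair in order; return the first reply that works."""
    errors = []
    for name, send in providers:
        try:
            return send(prompt)
        except Exception as exc:             # network error, rate limit, outage...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Usage with placeholder providers: the primary is "down", the backup answers.
def flaky_primary(prompt):
    raise ConnectionError("429 rate limited")

def backup(prompt):
    return f"reply to: {prompt}"

print(send_with_fallback("hello", [("openrouter", flaky_primary), ("openai", backup)]))
# prints "reply to: hello"
```

Janitor itself will not do this for you; the manual equivalent is the Settings → API swap described above, which the loop merely automates conceptually.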
Janitor AI Alternatives Worth a Look in 2026
Three platforms come up in every alternatives thread: Character.AI, SpicyChat, and CrushOn. Each makes a different trade.
Character.AI is the largest and the most polished. SimilarWeb-derived data put Character.AI at 153 million monthly visits in December 2025, with around 20 million monthly active users on its mobile app and $32.2 million in 2024 revenue. It has its own model, mobile apps, voice replies, and a much stricter content filter. It is the right pick for users who want stable, app-like reliability and do not need explicit content. The trade is creative range. Adult themes are off-limits and even mild violence triggers refusals.
SpicyChat is the closest direct competitor on the NSFW side, with around 52 million monthly visits and over 100 million registered users by the end of 2025. It runs its own inference, requires no API key for casual use, and has a freemium model with a paid tier for higher rate limits. Bots tend to be lower quality on average than the best Janitor AI work, but the experience is friction-free.
CrushOn AI sits between the two. Self-hosted models, NSFW allowed, freemium pricing, and a smaller community. Worth trying for users who want NSFW without the proxy and API juggling.
| Platform | NSFW | Monthly visits (2025-26) | Pricing | Best for |
|---|---|---|---|---|
| Janitor AI | Yes (toggle) | ~149M (Mar 2026) | Free + your API costs | Heavy customization, large bot library |
| Character.AI | No | ~153M (Dec 2025) | Free + Character+ ($9.99/mo) | Mainstream, stable, mobile |
| SpicyChat | Yes | ~52M | Free + paid tiers | Friction-free NSFW |
| CrushOn AI | Yes | ~10-15M | Freemium | Middle ground |
| SillyTavern + Kobold | Yes | n/a (open source) | Free, local hardware | Privacy maxis, power users |
The wider AI companion category is rapidly becoming a real business. Appfigures put cumulative consumer spending on AI companion apps at $221 million as of July 2025, with H1 2025 revenue of $82 million, up 64% year over year, and 337 active revenue-generating apps in operation. Sensor Tower's State of Mobile 2026 report found AI in-app purchases generated more than $5 billion across all categories in 2025, with ChatGPT alone at $3.4 billion. Fortune Business Insights values the broader AI companion market at $37.73 billion in 2025, projecting $435.9 billion by 2034 at a 31.24% CAGR. Forecasts vary widely between firms, but the direction is not in dispute.
The wider ecosystem also includes SillyTavern, an open-source frontend that pairs with any LLM backend and offers far more control than Janitor AI's interface. It is the natural next step for users who outgrow Janitor's settings panel, especially those who want offline privacy or to integrate APIs from non-mainstream providers.
Paying for Janitor AI with Crypto: A Quick Guide
Janitor AI itself does not charge a subscription. The cost lives upstream, with whichever API provider you connect: OpenRouter, OpenAI, Anthropic, or a smaller proxy operator.
OpenRouter accepts crypto payments directly, including stablecoins on several networks. That matters for users in regions where card payments to U.S. AI services are blocked or flagged. Topping up an OpenRouter balance with USDT or USDC takes a few minutes and converts to a credit balance you spend across any model on the platform.
For users who prefer a payment processor over a direct on-chain transaction, Plisio can handle crypto invoices for merchants that accept them, including OpenRouter top-ups that fund Janitor AI usage. The flow is the same as paying any other crypto invoice: scan the QR code, send the transaction, wait for confirmations, and the credit lands in the API account.
This is also the path most often recommended for users who do not want a card statement showing repeated AI service charges. Crypto creates a cleaner separation between the payment and the activity it funds, which, for a hobby that lives somewhere between AI tooling and adult entertainment, is a feature more than a few people quietly want.
The bottom line on Janitor AI in 2026
Janitor AI is neither the dystopia its critics describe nor the sandbox its fans pretend. It is a thin, useful interface bolted onto someone else's language model. That makes it cheap, flexible, and unstable. The platform rewards users who understand what they are wiring together and frustrates anyone who expected a polished consumer app.
Industry write-ups peg the user base above 15 million, with a heavy 18-24 skew and a majority-female audience. That is a demographic mix the rest of the AI industry has mostly failed to reach. Geography is global: about 38% of traffic is U.S., followed by Brazil, Mexico, India, and Indonesia, per Semrush March 2026 data. Average session times above 18 minutes suggest the platform has solved a real engagement problem, even if it has not solved most of the safety ones.
If you treat it as a tool (pick a reliable model, write a real persona, keep a backup proxy, never share personal information with a bot) Janitor AI does what it promises and not much more. If you treat it as a substitute for a curated service, the seams will show fast.
The interesting question is not whether Janitor AI survives. It will, in some form, as long as language models exist and people want to play with characters. The interesting question is what fills the role of safety, payment, and accountability in a category that grew from a hobbyist hack into an everyday service for millions of users. The answer is still being written, mostly by the users themselves.