Cal.com Is Going Closed Source: What Its AI Security Argument Means for Open Source SaaS

Thu Nghiem

AI SEO Specialist, Full Stack Developer

Cal.com just did the thing a lot of open source SaaS founders quietly fear they will have to do one day.

After five years of shipping in the open, Cal.com announced it’s moving its core product to closed source. The reason they put front and center is not licensing drama or “we need to monetize harder”; it’s security. Specifically, AI assisted vulnerability discovery is making it easier to find, chain, and exploit weaknesses in a widely deployed scheduling product.

If you build SaaS, or you operate an AI product that touches customer data, this is worth sitting with. Not because Cal.com is uniquely right or wrong, but because it’s a clean case study in how the open source trust bargain is changing.

What Cal.com actually announced (and what is changing)

Cal.com’s own post is pretty direct: they’re closing the source of the core product going forward, because they believe the risk profile has shifted and they’re on the hook for customer data protection in a way that is getting harder, faster, with AI in the mix. Their official rationale is here: Cal.com goes closed source: why.

A few important details get lost when this story gets summarized as “another open source company sells out”.

  1. This is about the core product going forward. The practical effect is that the canonical Cal.com you run for serious business use is no longer transparently auditable by default.
  2. They are keeping an open option alive. Cal.diy remains as a path for hobbyists and self hosters who want something open. It’s not “open source is dead”, it’s more like “open source is being moved to a side track”.
  3. The company is explicitly tying this to AI and security. That’s what makes this different from the usual licensing fights. They’re claiming the attacker economics changed.

That last part is the headline. And it’s the part founders should interrogate carefully, because it could be the new default excuse. Or it could be the first widely visible example of a real inflection point.

The AI vulnerability argument, in plain terms

The core claim is basically this:

Open code plus AI tools equals a faster path from “someone notices a bug pattern” to “someone produces a working exploit that hits real customers”.

That claim is not crazy. AI is good at the boring middle of exploitation now. Not magic, not one prompt RCE, but the grind:

  • reading unfamiliar codebases quickly
  • spotting suspicious flows and missing checks
  • generating test cases and payload variations
  • turning half a clue into a reproducible PoC
  • scaling the search across many repos and versions

A human still has to aim it, but the cost per attempt is falling. That means more attempts. More coverage. More “good enough” attackers.

The New Stack’s write up frames it similarly and adds more context on the security angle: Cal.com codebase security and AI.

So the argument is credible at a high level. But the real question is the one founders actually care about.

Does closing the source meaningfully reduce risk for a SaaS product?

Security reality check: closed source is not a shield, but it does change the game

There’s a tired line in security circles: “security through obscurity is not security”. True, but incomplete.

Obscurity is not a strategy, but it can be a layer. And layers matter.

Here’s the practical breakdown for SaaS:

1. Your biggest attack surface is usually not the repo

If you run a hosted SaaS, the most common compromise paths often look like:

  • auth misconfigurations and session issues
  • IDORs and access control bugs
  • insecure webhooks and OAuth edges
  • dependency vulnerabilities
  • cloud and CI/CD missteps
  • leaked keys, overly broad tokens
  • supply chain problems

Many of those can be found from the outside with black box testing. No source required. AI helps there too, by the way.
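For instance, an IDOR is often a one-line omission: the handler fetches a record by id without checking that the caller owns it. A minimal sketch (the store and handler names here are hypothetical, not Cal.com code):

```python
# Hypothetical in-memory store of bookings keyed by numeric id.
BOOKINGS = {
    1: {"owner": "alice", "title": "Standup"},
    2: {"owner": "bob", "title": "1:1"},
}

def get_booking_vulnerable(booking_id, current_user):
    # IDOR: any authenticated user can read any booking
    # just by guessing or incrementing the id.
    return BOOKINGS.get(booking_id)

def get_booking_fixed(booking_id, current_user):
    # Ownership check: the record is only returned if it
    # belongs to the requesting user.
    booking = BOOKINGS.get(booking_id)
    if booking is None or booking["owner"] != current_user:
        return None  # treat "not yours" the same as "not found"
    return booking
```

Black box testing finds the vulnerable version by iterating ids across two accounts and comparing responses, which is exactly the kind of loop AI agents automate well.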

So if a company thinks closing the repo “solves” security, that’s wishful thinking.

2. But open code does reduce attacker cost in specific ways

For certain classes of vulnerabilities, having the code is a huge accelerant:

  • subtle authorization logic errors that are hard to infer from responses
  • “only exploitable with this exact sequence” issues
  • edge cases in custom crypto, token signing, invite flows
  • multi tenant boundary mistakes where you need to see the model relationships
  • bespoke business logic vulnerabilities

With source, an attacker can reason precisely. With AI, they can do it faster. Closing the source doesn’t eliminate these bugs, but it raises the cost of discovery and exploitation. Sometimes that is enough to shift your risk from “will be exploited this quarter” to “probably not”.
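To make the “token signing, invite flows” point concrete: a non constant time comparison on an invite token is almost invisible from the outside, but jumps out to anyone reading the source. A minimal sketch (the token value and function names are hypothetical):

```python
import hmac

# Hypothetical stored invite token for illustration only.
STORED_INVITE_TOKEN = "f3a9c1d07b"

def check_invite_vulnerable(submitted):
    # `==` short-circuits on the first mismatched character,
    # leaking timing information an attacker who has read the
    # source knows to measure.
    return submitted == STORED_INVITE_TOKEN

def check_invite_fixed(submitted):
    # Constant-time comparison from the standard library.
    return hmac.compare_digest(submitted, STORED_INVITE_TOKEN)
```

Both versions behave identically in functional tests, which is why this class of bug survives review unless someone is looking for it.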

3. Open source can improve defense, but only if the incentives are real

The best argument for open source security has always been “more eyes”. The honest counterpoint is “more eyes who get paid”. Most repos do not have that.

If a project is popular but the security program is thin, open code can become a vulnerability buffet. And AI widens the gap between “casual reviewer” and “motivated attacker”. Attackers only need one hit. Defenders need sustained process.

So yes. Cal.com’s logic holds up in one specific sense: AI makes it easier to extract value from open code, and the defensive upside of openness is not automatic.

But.

There’s a trade. Cal.com is now taking on a different problem set: trust, verifiability, and community energy.

What stays open with Cal.diy, and why that matters

Keeping Cal.diy open is more than a consolation prize. It’s Cal.com trying to preserve a channel for:

  • developers who want to tinker
  • smaller self hosted deployments
  • community contributions that are not directly tied to the hardened enterprise surface

In other words, Cal.diy is a pressure valve.

This structure also resembles a pattern we’re going to see more:

  • “Open core” in marketing
  • Closed source for the production grade thing enterprises actually run
  • A separate repo that stays open for experiments, examples, and goodwill

If you’re a founder, it’s worth naming what this is. It is segmentation. Not just pricing segmentation. Risk segmentation.

The open version is allowed to be less of a liability.

The community reaction you should expect (and why it’s rational)

When an open source SaaS closes, reactions tend to cluster into a few predictable buckets.

1. The trust crowd

These users valued Cal.com because they could audit it, fork it, and not worry about sudden lock in. For them, “we closed for security” is often heard as “we closed because we can”. Some will leave on principle. Not loudly, just quietly.

2. The pragmatists

Some users do not care about the license as long as uptime is good and the roadmap ships. If the hosted product is strong, they stay. If Cal.diy stays viable, some will self host that and move on.

3. The fork makers

A non trivial number of developers will try to fork the last open version and maintain it. Most forks die. A few don’t. The fork threat is usually less about code and more about momentum and distribution.

4. The security professionals

This group is split. Some will say closing reduces scrutiny and increases risk long term. Others will say “threat model matters, and attackers have gotten cheaper”. Both can be true.

If you’re operating a SaaS, the important part is not the discourse. It’s the second order effect: contributors and integrators decide whether you are still a platform worth building around.

That impacts GTM more than most founders expect.

Why “AI makes open source unsafe” is only half the story

There’s a more uncomfortable possibility here.

AI does not just make attackers better. It also makes defenders better, if you build the muscle for it.

  • AI assisted code review can catch whole categories of mistakes earlier.
  • LLM powered security tooling can triage findings and map them to real exploitability.
  • Fuzzing and property based testing get easier to scale.
  • Security regression tests are faster to generate.
  • Threat modeling and abuse case enumeration can be partially automated.
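The fuzzing point above does not require heavy tooling. Even a stdlib-only random harness can assert invariants on an input sanitizer (the sanitizer and its rules here are a hypothetical example, not real project code):

```python
import random
import string

def sanitize_slug(raw):
    # Hypothetical sanitizer: lowercase, keep only [a-z0-9-],
    # drop everything else.
    out = []
    for ch in raw.lower():
        if ch.isascii() and (ch.isalnum() or ch == "-"):
            out.append(ch)
    return "".join(out)

def fuzz_sanitizer(iterations=1000, seed=42):
    # Seeded RNG keeps the run reproducible in CI.
    rng = random.Random(seed)
    alphabet = string.printable
    for _ in range(iterations):
        raw = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 40)))
        slug = sanitize_slug(raw)
        # Invariant 1: output contains only allowed characters.
        assert all(c.isascii() and (c.isalnum() or c == "-") for c in slug)
        # Invariant 2: sanitizing twice changes nothing (idempotence).
        assert sanitize_slug(slug) == slug
    return True
```

The same shape scales up: swap the random generator for a coverage guided fuzzer or an LLM proposing adversarial inputs, and keep the invariants.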

So the key variable is not “AI exists”. It’s whether your org is operating like a security engineering org.

A lot of open source SaaS companies are still operating like product companies that happen to have public repos. That was survivable when the attacker cost was higher. It’s less survivable now.

In that sense, Cal.com’s move is a signal: some teams would rather change the distribution model than build a much heavier security machine.

That’s not cowardice. It’s a business decision. But it has consequences.

What this means for open source SaaS defensibility in 2026

If you are a founder, you’ve probably already felt the squeeze from both sides:

  • Open source makes adoption easier, forks easier, and monetization weird.
  • Closed source makes trust harder, especially when your users are developers.

AI adds a third pressure: security economics.

So defensibility shifts away from “the code is the moat” and toward things like:

  • operational excellence and uptime
  • compliance posture (SOC 2, ISO, HIPAA, etc.)
  • distribution and partnerships
  • data network effects (careful, this can backfire)
  • product velocity and UX polish
  • enterprise features that are not just toggles, but processes (audit logs, SCIM, SSO, DLP, retention policies)
  • strong security program that you can communicate clearly

One thing to note. If you close your source but you cannot clearly explain your security posture, you may end up with the worst of both worlds: less community trust and still a fragile surface.

Practical lessons for founders and product operators (even if you stay open)

A few takeaways that are not ideological, just practical.

1. Decide what “open” is buying you, in one sentence

Is it distribution? Trust? Contributors? Ecosystem? Hiring? If you can’t state it, you’re not managing it.

Then measure it. Contributors per month. Integrations shipped by third parties. Self hosted conversions. Brand lift. Anything.

2. Treat “AI makes attackers faster” as a requirement, not an excuse

Assume your repo will be scanned by AI models. Assume your API will be fuzzed by agents. Assume your auth flows will be hammered by automation that doesn’t get tired.

That should change how you ship:

  • stricter code review gates for auth and multi tenant boundaries
  • mandatory security regression tests for past incidents
  • dependency and container scanning in CI
  • shorter patch cycles
  • real WAF and rate limiting tuned to abuse cases, not just traffic
  • secrets management that is boring and locked down
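The rate limiting item is worth making concrete. A token bucket is the usual shape; here is a minimal in-process sketch (a production deployment would back this with Redis or enforce it at the edge):

```python
import time

class TokenBucket:
    """Allow `capacity` requests in a burst, refilled at `rate` tokens per second."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Key the buckets per abuse case (login attempts, invite sends, booking lookups), not just per IP, because the tireless automation described above rotates IPs cheaply.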

3. If you’re going to close, over communicate the trust replacement

Open source is a trust mechanism. If you remove it, you need a substitute:

  • published security program and response SLAs
  • clear vulnerability disclosure policy
  • bug bounty that is not symbolic
  • third party audits where possible
  • transparent changelogs, especially for security fixes
  • documented data handling practices

Basically, replace “you can read the code” with “you can verify the process”.

4. Keep an open lane, if your users are developers

Cal.diy is a good example of a compromise that preserves some developer goodwill. If you can keep SDKs, clients, Terraform modules, docs, examples, even parts of the UI open, you stay buildable.

Developer trust is not binary. It’s cumulative.

5. Plan for the fork narrative, even if the fork never wins

When you close source, someone will fork the last version. Your messaging should anticipate that without picking a fight.

Be calm. Be factual. Emphasize support, security response, hosted reliability, and a clear roadmap. Most customers do not want to be their own security team.

A quick note on communicating this stuff publicly (without making it worse)

The “AI security” argument will be used more often now. Some companies will mean it. Some will use it as cover for a business model pivot.

The difference is in the specifics.

If you’re making a similar change, you need to be able to say:

  • what threat model changed
  • what incidents or near misses informed the decision (without leaking exploit details)
  • what your customers gain, concretely
  • what remains open, if anything
  • what your long term commitment is to self hosting, APIs, and portability

Vague statements about “AI is scary now” won’t hold up with technical buyers.

Where Junia.ai fits in (if you’re writing about sensitive product moves)

If you’re a founder or operator who has to explain technical decisions publicly, you already know the hard part is not writing words. It’s writing the right words, with the right nuance, fast.

If you’re turning something like this into a blog post, release note, or FAQ, tools can help. Junia AI is built for long form, structured content and keeping it consistent with your voice, which matters when you’re trying to avoid sounding like a press release. Two features that are actually relevant here:

  • Internal linking for credibility and navigation, especially when you’re writing multi page security or policy updates. Junia’s AI internal linking tool is one way to keep readers moving through the context instead of bouncing.
  • If you’re worried about sounding synthetic during a sensitive announcement, it’s worth running drafts through an AI detection pass as a gut check. Junia has an AI detector for that.

Not because “AI detection” is some absolute truth. More because tone matters, and you want your message to land like a human wrote it. Because a human did.

The bottom line

Cal.com is betting that closing its core source code reduces real world risk in an era where AI makes vulnerability discovery cheaper and faster. That bet is not obviously wrong.

But it’s also not a free win. Closing source trades one kind of safety for another kind of fragility: less public scrutiny, more reliance on internal process, and a possible trust hit with the exact developer community that helped it grow.

If you run open source SaaS in 2026, the lesson is not “close now”. It’s this:

The old bargain, “open means safer because more eyes”, only works if you have a security engine that can keep up with the new kind of eyes. AI just raised the baseline.

Frequently asked questions
Why is Cal.com moving to closed source?

Cal.com is shifting its core product to closed source primarily due to security concerns. With advancements in AI-assisted vulnerability discovery, attackers can more easily find and exploit weaknesses in widely deployed software. Cal.com believes that closing the source reduces the risk of such exploits affecting customer data.

Does closing the source actually make a product more secure?

Closing the source code does not guarantee complete security, but it changes the attacker's cost structure. While many vulnerabilities can be found through external testing without source access, closing the source raises the difficulty of discovering subtle bugs, especially when AI tools accelerate exploit development. However, security requires multiple layers beyond obscurity.

How does AI accelerate vulnerability discovery?

AI accelerates vulnerability discovery by efficiently analyzing unfamiliar codebases, identifying suspicious logic flows, generating test cases, and producing reproducible proofs of concept. This lowers the cost and increases the scale of attacks, enabling attackers to find and exploit bugs faster than before.

What happens to the open source version?

Cal.com is maintaining an open-source version called Cal.diy aimed at hobbyists and self-hosters who want an open scheduling solution. This approach preserves a channel for developers interested in tinkering while moving the core business-focused product to closed source for enhanced security.

What are the trade-offs of going closed source?

Moving to closed source shifts challenges towards trust, verifiability, and community energy. While it may reduce some risks related to attacker exploitation via open code, it also limits transparency and community contributions that are hallmarks of open-source projects. Cal.com aims to balance this by keeping Cal.diy open-source.

Does this mean open source SaaS is no longer viable?

Not necessarily. Cal.com's decision highlights evolving security dynamics influenced by AI-assisted attacks but doesn't mean open source is dead. Instead, it suggests a nuanced landscape where some projects may choose hybrid models or adjust openness based on risk profiles and business needs. Open source remains valuable but must be managed carefully with strong security practices.
  • Not necessarily. Cal.com's decision highlights evolving security dynamics influenced by AI-assisted attacks but doesn't mean open source is dead. Instead, it suggests a nuanced landscape where some projects may choose hybrid models or adjust openness based on risk profiles and business needs. Open source remains valuable but must be managed carefully with strong security practices.