------
A few important changes to note:
We will no longer provide public Docker images, so your team will need to build the image yourselves.
Please do not use Cal.diy — it’s not intended for enterprise use.
I also replaced Radicale with rustical, and gained free push updates.
https://cal.rs/ and https://github.com/lennart-k/rustical
And if you wanna try it out. https://cal.ache.one/u/ache
Teams, Organizations, Insights, Workflows, SSO/SAML, and other EE-only features have been removed
cal.ws is $630 on Namecheap... the tokens required to build this are cheaper than the domain. There you go: guaranteed community ownership of the code, the best face and "good will" promised by choosing a FOSS license to begin with, and future rug pulls averted.
Seeing it from the other side of the fence: if you see that all contributors are required to cede controlling power into a single hand (certain Foundations excepted, yadda yadda), it's not proper Open Source in spirit, only in form; and closing the source is just a change of mind away.
The thing that's always concerned me with them is questions of "what level of access is required to the system(s) actually hosting my calendar data?" and "if this vendor is compromised, what level of access might an attacker in control of the vendor systems have?" Obviously this will vary by what kind of access controls backends have (e.g. M365, Google Workspace, assorted CRM systems, smaller cloud providers, self-hosted providers, etc.).
Edit: basically, with a lot of these systems, what's expected to be the authoritative data provider/storage?
Maybe I'm being critical but the copy gives me the ick
Edit: I just realised this is by cal.com. I'm leaving my comment intact, if anything it adds to my ick
I am now actively rooting for cal.com to go out of business as a cautionary tale for any company thinking about taking open source projects proprietary.
FOSS || GTFO
Wow what a 180 from just a year ago when their blog said, "For companies that handle sensitive information, deploying open-source scheduling software on-premises can offer an extra layer of security. Unlike cloud services controlled by external vendors, on-prem installations let teams maintain full ownership of their infrastructure. " ¹
I just cannot trust a company that does a bait and switch like this.
¹ https://cal.com/blog/open-source-scheduling-empower-your-tea...
Disclosure: I'm the CEO of NeetoCal.
Their internal IT infrastructure runs self-hosted OSS wherever possible. I don't think cal.rs is a toy project, they know the perils and headaches of doing open source.
From that page:
> Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.
Yeah, and AI can also be pointed at closed source as soon as that source leaks. The threat has increased for both open and closed source by roughly the same amount.
In fact, open source benefits from white hat scanning for vulnerabilities, while closed source does not. So when there's a vuln in open source, there will likely be a shorter window between when it is known by attackers and when authors are alerted.
I believe that the reason they chose to close the source is just security theater to demonstrate to investors and clients: "Look at all these FOSS projects getting pwned; that's why you can trust us, because we're not FOSS." There is, of course, probably a negative correlation between closing source and security. I'd argue that the most secure operating systems, used in fintech, health, government, etc., got to be so secure specifically by allowing tens or hundreds of thousands of people to poke at their code and then allowing thousands or tens of thousands of people to fix said vulns pro bono.
I'd be interested to see an estimate of the financial value of the volunteer work on, say, the Linux or various BSD kernels. Imagine the cost of PAYING to produce the modern Linux kernel: millions, possibly billions, of dollars even assuming average SWE compensation rates, I'd wager.
Too bad cal.com is too short sighted to appreciate volunteers.
Is there such a thing as a closed source program anymore?
Look, tech companies lie all the time to make their bad decisions sound less bad. Simple example: almost every "AI made us more efficient" announcement is really just a company making (unpopular) layoffs, but trying to brand them as being part of an "efficiency effort".
I'd bet $100 this company just wants to go closed source for business reasons, and (just like with the layoffs masquerading as "AI efficiency") AI is being used as the scapegoat.
Yeah, and average kernel devs are not average SWEs
There is no moat anymore.
The only thing new here is the excuse.
> IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
I'm just choosing to focus on the substance of the argument itself, which I think is risible regardless of who makes it and why.
> [!WARNING]
> Use at your own risk. Cal.diy is the open source community edition of Cal.com, intended for users who want to self-host their own Cal.diy instance. It is strictly recommended for personal, non-production use. Please review all installation and configuration steps carefully. Self-hosting requires advanced knowledge of server administration, database management, and securing sensitive data. Proceed only if you are comfortable with these responsibilities.
> [!TIP]
> For commercial and enterprise-ready scheduling infrastructure, use Cal.com, not Cal.diy; hosted by us, or request on-prem enterprise access here: https://cal.com/sales
The community-driven, open-source scheduling platform.
GitHub · Discussions · Issues · Contributing
Cal.diy is the community-driven, fully open-source scheduling platform — a fork of Cal.com with all enterprise/commercial code removed.
Cal.diy is 100% MIT-licensed with no proprietary "Enterprise Edition" features. It's designed for individuals and self-hosters who want full control over their scheduling infrastructure without any commercial dependencies.
Note: Cal.diy is a self-hosted project. There is no hosted/managed version. You run it on your own infrastructure.
To get a local copy up and running, please follow these simple steps.
Here is what you need to be able to run Cal.diy.
If you want to enable any of the available integrations, you may want to obtain additional credentials for each one. More details on this can be found below under the integrations section.
Clone the repo (or fork https://github.com/calcom/cal.diy/fork)
git clone https://github.com/calcom/cal.diy.git
If you are on Windows, run the following command in Git Bash with admin privileges:

git clone -c core.symlinks=true https://github.com/calcom/cal.diy.git
Go to the project folder
cd cal.diy
Install packages with yarn
yarn
Set up your .env file:

- Duplicate .env.example to .env
- Use openssl rand -base64 32 to generate a key and add it under NEXTAUTH_SECRET in the .env file
- Use openssl rand -base64 24 to generate a key and add it under CALENDSO_ENCRYPTION_KEY in the .env file

Windows users: Replace the packages/prisma/.env symlink with a real copy to avoid a Prisma error (unexpected character / in variable name):

# Git Bash / WSL
rm packages/prisma/.env && cp .env packages/prisma/.env
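The .env setup above can be scripted. A minimal sketch, assuming a POSIX shell with openssl and GNU sed available (the temp directory and placeholder .env.example stand in for the real repo files; in the repo you would run this from the project root):

```shell
# Work in a scratch directory with a placeholder .env.example;
# in the real repo, run from the project root instead.
workdir=$(mktemp -d) && cd "$workdir"
printf 'NEXTAUTH_SECRET=\nCALENDSO_ENCRYPTION_KEY=\n' > .env.example

cp .env.example .env

# 32 random bytes -> 44 base64 characters (cookie encryption key)
NEXTAUTH_SECRET=$(openssl rand -base64 32)
# 24 random bytes -> 32 base64 characters (AES-256 key material)
CALENDSO_ENCRYPTION_KEY=$(openssl rand -base64 24)

# The | delimiter avoids clashing with / characters in base64 output.
sed -i "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=${NEXTAUTH_SECRET}|" .env
sed -i "s|^CALENDSO_ENCRYPTION_KEY=.*|CALENDSO_ENCRYPTION_KEY=${CALENDSO_ENCRYPTION_KEY}|" .env
```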
Set up Node. If your Node version does not meet the project's requirements, nvm (Node Version Manager) lets you use the version the project requires:
nvm use
You first might need to install the specific version and then use it:
nvm install && nvm use
You can install nvm from here.
yarn dx

- Requires Docker and Docker Compose to be installed
- Starts a local Postgres instance with a few test users; the credentials will be logged in the console
Default credentials created:

| Email | Password | Role |
|---|---|---|
| free@example.com | free | Free user |
| pro@example.com | pro | Pro user |
| trial@example.com | trial | Trial user |
| admin@example.com | ADMINadmin2022! | Admin user |
| onboarding@example.com | onboarding | Onboarding incomplete |
You can use any of these credentials to sign in at http://localhost:3000
Tip: To view the full list of seeded users and their details, run yarn db-studio and visit http://localhost:5555
Add export NODE_OPTIONS="--max-old-space-size=16384" to your shell script to increase the memory limit for the node process. Alternatively, you can run this in your terminal before running the app. Replace 16384 with the amount of RAM you want to allocate to the node process.
Add NEXT_PUBLIC_LOGGER_LEVEL={level} to your .env file to control the logging verbosity for all tRPC queries and mutations.
Where {level} can be one of the following:
0 for silly 1 for trace 2 for debug 3 for info 4 for warn 5 for error 6 for fatal
When you set NEXT_PUBLIC_LOGGER_LEVEL={level} in your .env file, it enables logging at that level and higher. The logger will include all logs that are at the specified level or higher. For example:

- NEXT_PUBLIC_LOGGER_LEVEL=2 logs from level 2 (debug) upwards, meaning levels 2 (debug), 3 (info), 4 (warn), 5 (error), and 6 (fatal) will be logged.
- NEXT_PUBLIC_LOGGER_LEVEL=3 logs from level 3 (info) upwards, meaning levels 3 (info), 4 (warn), 5 (error), and 6 (fatal) will be logged, but level 2 (debug) and level 1 (trace) will be ignored.

For example, to set the logger level to info:

echo 'NEXT_PUBLIC_LOGGER_LEVEL=3' >> .env
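If you want a quick guard against typos, a small shell check can confirm the value written to .env is one of the seven levels (the temp file below stands in for your real .env):

```shell
# Write a logger level, then confirm it falls in the valid 0-6 range.
envfile=$(mktemp)
echo 'NEXT_PUBLIC_LOGGER_LEVEL=3' >> "$envfile"

level=$(grep '^NEXT_PUBLIC_LOGGER_LEVEL=' "$envfile" | cut -d= -f2)
case "$level" in
  [0-6]) status=valid ;;
  *)     status=invalid ;;
esac
echo "$status"   # prints: valid
```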
Click the button below to open this project in Gitpod.
This will open a fully configured workspace in your browser with all the necessary dependencies already installed.
Configure environment variables in the .env file. Replace <user>, <pass>, <db-host>, and <db-port> with their applicable values
DATABASE_URL='postgresql://<user>:<pass>@<db-host>:<db-port>'
Download and install postgres in your local (if you don't have it already).
Create your own local DB by executing createdb <DB name>
Now open your psql shell with the DB you created: psql -h localhost -U postgres -d <DB name>
Inside the psql shell, execute \conninfo to display the connection info: the user, host, port, and database name you need for the next step.
Now extract all the info and add it to your DATABASE_URL. The url would look something like this
postgresql://postgres:postgres@localhost:5432/Your-DB-Name. The port is configurable and does not have to be 5432.
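Assembled in the shell, with every value below a hypothetical placeholder to be replaced by your own \conninfo output:

```shell
# Build DATABASE_URL from its parts; all values are placeholders.
DB_USER=postgres
DB_PASS=postgres
DB_HOST=localhost
DB_PORT=5432          # configurable; does not have to be 5432
DB_NAME=Your-DB-Name

DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"  # postgresql://postgres:postgres@localhost:5432/Your-DB-Name
```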
If you don't want to create a local DB, you can also consider using services like railway.app, Northflank, or Render.
Copy and paste your DATABASE_URL from .env to .env.appStore.
Set up the database using the Prisma schema (found in packages/prisma/schema.prisma)
In a development environment, run:
yarn workspace @calcom/prisma db-migrate
In a production environment, run:
yarn workspace @calcom/prisma db-deploy
Run mailhog to view emails sent during development.

NOTE: Required when E2E_TEST_MAILHOG_ENABLED is "1".
docker pull mailhog/mailhog
docker run -d -p 8025:8025 -p 1025:1025 mailhog/mailhog
Run (in development mode)
yarn dev
Open Prisma Studio to look at or modify the database content:
yarn db-studio
Click on the User model to add a new user record.
Fill out the fields email, username, and password, set metadata to an empty {} (remembering to hash your password with BCrypt), and click Save 1 Record to create your first user.
New users are set on a TRIAL plan by default. You might want to adjust this behavior to your needs in the packages/prisma/schema.prisma file.
Open a browser to http://localhost:3000 and log in with the user you just created.
Seed the local db by running
cd packages/prisma
yarn db-seed
The above command will populate the local db with dummy users.
Be sure to set the environment variable NEXTAUTH_URL to the correct value. If you are running locally, as the documentation within .env.example mentions, the value should be http://localhost:3000.
# In a terminal just run:
yarn test-e2e
# To open the last HTML report run:
yarn playwright show-report test-results/reports/playwright-html-report
Run npx playwright install to download test browsers and resolve the error below when running yarn test-e2e:
Executable doesn't exist at /Users/alice/Library/Caches/ms-playwright/chromium-1048/chrome-mac/Chromium.app/Contents/MacOS/Chromium
Pull the current version:
git pull
Check if dependencies got added/updated/removed
yarn
Apply database migrations by running one of the following commands:
In a development environment, run:
yarn workspace @calcom/prisma db-migrate
(This can clear your development database in some cases)
In a production environment, run:
yarn workspace @calcom/prisma db-deploy
Check for .env variables changes
yarn predev
Start the server. In a development environment, just do:
yarn dev
For a production build, run for example:
yarn build
yarn start
Enjoy the new version.
The Docker image can be found on DockerHub at https://hub.docker.com/r/calcom/cal.diy.
Note for ARM Users: Use the {version}-arm suffix for pulling images. Example: docker pull calcom/cal.diy:v5.6.19-arm.
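Based on the tag scheme in the note above, a small helper can pick the suffix from your CPU architecture (v5.6.19 is just the example version from the note):

```shell
# Append the -arm suffix for ARM machines, per the tag scheme above.
VERSION="v5.6.19"     # example version; substitute the release you actually want
case "$(uname -m)" in
  aarch64|arm64) TAG="${VERSION}-arm" ;;
  *)             TAG="${VERSION}" ;;
esac
echo "docker pull calcom/cal.diy:${TAG}"
```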
Make sure you have docker & docker compose installed on the server / system. Both are installed by most docker utilities, including Docker Desktop and Rancher Desktop.
Note: docker compose without the hyphen is now the primary method of using docker-compose, per the Docker documentation.
Clone the repository
git clone --recursive https://github.com/calcom/cal.diy.git
Change into the directory
cd cal.diy
Prepare your configuration: Copy .env.example to .env, then update .env
cp .env.example .env
Most configurations can be left as-is, but for configuration options see Important Run-time variables below.
Required Secret Keys
Before starting, you must generate secure values for NEXTAUTH_SECRET and CALENDSO_ENCRYPTION_KEY. Using the default secret placeholder in production is a security risk.
Generate NEXTAUTH_SECRET (cookie encryption key):
openssl rand -base64 32
Generate CALENDSO_ENCRYPTION_KEY (must be 32 bytes for AES256):
openssl rand -base64 24
Update your .env file with these values:
NEXTAUTH_SECRET=<your_generated_secret>
CALENDSO_ENCRYPTION_KEY=<your_generated_key>
Push Notifications (VAPID Keys) If you see an error like:
Error: No key set vapidDetails.publicKey
This means your environment variables for Web Push are missing.
You must generate and set NEXT_PUBLIC_VAPID_PUBLIC_KEY and VAPID_PRIVATE_KEY.
Generate them with:
npx web-push generate-vapid-keys
Then update your .env file:
NEXT_PUBLIC_VAPID_PUBLIC_KEY=your_public_key_here
VAPID_PRIVATE_KEY=your_private_key_here
Do not commit real keys to .env.example — only placeholders.
Update the appropriate values in your .env file, then proceed.
(optional) Pre-Pull the images by running the following command:
docker compose pull
Start Cal.diy via docker compose
To run the complete stack, which includes a local Postgres database, Cal.diy web app, and Prisma Studio:
docker compose up -d
To run Cal.diy web app and Prisma Studio against a remote database, ensure that DATABASE_URL is configured for an available database and run:
docker compose up -d calcom studio
To run only the Cal.diy web app, ensure that DATABASE_URL is configured for an available database and run:
docker compose up -d calcom
Note: to run in attached mode for debugging, remove -d from your desired run command.
Open a browser to http://localhost:3000, or your defined NEXT_PUBLIC_WEBAPP_URL. The first time you run Cal.diy, a setup wizard will initialize. Define your first user, and you're ready to go!
Note for first-time setup (Calendar integration): During the setup wizard, you may encounter a "Connect your Calendar" step that appears to be required. If you do not wish to connect a calendar at this time, you can skip this step by navigating directly to the dashboard at <NEXT_PUBLIC_WEBAPP_URL>/event-types. Calendar integrations can be added later from the Settings > Integrations page.
Stop the Cal.diy stack
docker compose down
Pull the latest changes
docker compose pull
Update env vars as necessary.
Re-start the Cal.diy stack
docker compose up -d
Clone the repository
git clone https://github.com/calcom/cal.diy.git
Change into the directory
cd cal.diy
Rename .env.example to .env and then update .env
For configuration options see Build-time variables below. Update the appropriate values in your .env file, then proceed.
Build the Cal.diy docker image:
Note: Due to application configuration requirements, an available database is currently required during the build process.
a) If hosting elsewhere, configure the DATABASE_URL in the .env file, and skip the next step
b) If a local or temporary database is required, start a local database via docker compose.
docker compose up -d database
Build Cal.diy via docker compose (DOCKER_BUILDKIT=0 must be provided to allow a network bridge to be used at build time. This requirement will be removed in the future)
DOCKER_BUILDKIT=0 docker compose build calcom
Start Cal.diy via docker compose
To run the complete stack, which includes a local Postgres database, Cal.diy web app, and Prisma Studio:
docker compose up -d
To run Cal.diy web app and Prisma Studio against a remote database, ensure that DATABASE_URL is configured for an available database and run:
docker compose up -d calcom studio
To run only the Cal.diy web app, ensure that DATABASE_URL is configured for an available database and run:
docker compose up -d calcom
Note: to run in attached mode for debugging, remove -d from your desired run command.
Open a browser to http://localhost:3000, or your defined NEXT_PUBLIC_WEBAPP_URL. The first time you run Cal.diy, a setup wizard will initialize. Define your first user, and you're ready to go!
These variables must also be provided at runtime
| Variable | Description | Required | Default |
|---|---|---|---|
| DATABASE_URL | database url with credentials - if using a connection pooler, this setting should point there | required | postgresql://unicorn_user:magical_password@database:5432/calendso |
| NEXT_PUBLIC_WEBAPP_URL | Base URL of the site. NOTE: if this value differs from the value used at build-time, there will be a slight delay during container start (to update the statically built files). | optional | http://localhost:3000 |
| NEXTAUTH_URL | Location of the auth server. By default, this is the Cal.diy docker instance itself. | optional | {NEXT_PUBLIC_WEBAPP_URL}/api/auth |
| NEXTAUTH_SECRET | Cookie encryption key. Must match build variable. Generate with: openssl rand -base64 32 | required | secret |
| CALENDSO_ENCRYPTION_KEY | Authentication encryption key (32 bytes for AES256). Must match build variable. Generate with: openssl rand -base64 24 | required | secret |
If building the image yourself, these variables must be provided at the time of the docker build, and can be provided by updating the .env file. Currently, if you require changes to these variables, you must follow the instructions to build and publish your own image.
| Variable | Description | Required | Default |
|---|---|---|---|
| DATABASE_URL | database url with credentials - if using a connection pooler, this setting should point there | required | postgresql://unicorn_user:magical_password@database:5432/calendso |
| MAX_OLD_SPACE_SIZE | Needed for Nodejs/NPM build options | required | 4096 |
| NEXTAUTH_SECRET | Cookie encryption key | required | secret |
| CALENDSO_ENCRYPTION_KEY | Authentication encryption key | required | secret |
| NEXT_PUBLIC_WEBAPP_URL | Base URL injected into static files | optional | http://localhost:3000 |
| NEXT_PUBLIC_WEBSITE_TERMS_URL | custom URL for terms and conditions website | optional | |
| NEXT_PUBLIC_WEBSITE_PRIVACY_POLICY_URL | custom URL for privacy policy website | optional | |
| CALCOM_TELEMETRY_DISABLED | Allow Cal.diy to collect anonymous usage data (set to 1 to disable) | optional | |
If running behind a load balancer which handles SSL certificates, you will need to add the environment variable NODE_TLS_REJECT_UNAUTHORIZED=0 to prevent requests from being rejected. Only do this if you know what you are doing and trust the services/load balancers directing traffic to your service.
Certain versions may have trouble creating a user if the field metadata is empty. Using an empty json object {} as the field value should resolve this issue. Also, the id field will autoincrement, so you may also try leaving the value of id as empty.
If you experience this error, it may be caused by the way the default Auth callback on the server uses WEBAPP_URL as a base URL. The container does not necessarily have access to the same DNS as your local machine, and therefore needs to be configured to resolve to itself. You may be able to correct this by setting NEXTAUTH_URL=http://localhost:3000/api/auth, which helps the backend loop back to itself.
docker-calcom-1 | @calcom/web:start: [next-auth][error][CLIENT_FETCH_ERROR]
docker-calcom-1 | @calcom/web:start: https://next-auth.js.org/errors#client_fetch_error request to http://testing.localhost:3000/api/auth/session failed, reason: getaddrinfo ENOTFOUND testing.localhost {
docker-calcom-1 | @calcom/web:start: error: {
docker-calcom-1 | @calcom/web:start: message: 'request to http://testing.localhost:3000/api/auth/session failed, reason: getaddrinfo ENOTFOUND testing.localhost',
docker-calcom-1 | @calcom/web:start: stack: 'FetchError: request to http://testing.localhost:3000/api/auth/session failed, reason: getaddrinfo ENOTFOUND testing.localhost\n' +
docker-calcom-1 | @calcom/web:start: ' at ClientRequest.<anonymous> (/calcom/node_modules/next/dist/compiled/node-fetch/index.js:1:65756)\n' +
docker-calcom-1 | @calcom/web:start: ' at ClientRequest.emit (node:events:513:28)\n' +
docker-calcom-1 | @calcom/web:start: ' at ClientRequest.emit (node:domain:489:12)\n' +
docker-calcom-1 | @calcom/web:start: ' at Socket.socketErrorListener (node:_http_client:494:9)\n' +
docker-calcom-1 | @calcom/web:start: ' at Socket.emit (node:events:513:28)\n' +
docker-calcom-1 | @calcom/web:start: ' at Socket.emit (node:domain:489:12)\n' +
docker-calcom-1 | @calcom/web:start: ' at emitErrorNT (node:internal/streams/destroy:157:8)\n' +
docker-calcom-1 | @calcom/web:start: ' at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n' +
docker-calcom-1 | @calcom/web:start: ' at processTicksAndRejections (node:internal/process/task_queues:83:21)',
docker-calcom-1 | @calcom/web:start: name: 'FetchError'
docker-calcom-1 | @calcom/web:start: },
docker-calcom-1 | @calcom/web:start: url: 'http://testing.localhost:3000/api/auth/session',
docker-calcom-1 | @calcom/web:start: message: 'request to http://testing.localhost:3000/api/auth/session failed, reason: getaddrinfo ENOTFOUND testing.localhost'
docker-calcom-1 | @calcom/web:start: }
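One way to apply the NEXTAUTH_URL fix described above is an idempotent set-or-replace in the shell; the temp file stands in for your real .env, and the localhost:3000 URL is an assumption that should match your NEXT_PUBLIC_WEBAPP_URL port:

```shell
# Set or replace NEXTAUTH_URL so the backend loops back to itself.
envfile=$(mktemp)                                    # stand-in for your real .env
echo 'NEXTAUTH_URL=http://testing.localhost:3000/api/auth' > "$envfile"

url='http://localhost:3000/api/auth'
if grep -q '^NEXTAUTH_URL=' "$envfile"; then
  sed -i "s|^NEXTAUTH_URL=.*|NEXTAUTH_URL=${url}|" "$envfile"   # replace existing entry
else
  echo "NEXTAUTH_URL=${url}" >> "$envfile"                      # or append a new one
fi
cat "$envfile"   # NEXTAUTH_URL=http://localhost:3000/api/auth
```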
You can deploy Cal.diy on Railway. The team at Railway also have a detailed blog post on deploying on their platform.
You can deploy Cal.diy on Northflank. The team at Northflank also have a detailed blog post on deploying on their platform.
Currently Vercel Pro Plan is required to be able to Deploy this application with Vercel, due to limitations on the number of serverless functions on the free plan.
Cal.diy is fully open source, licensed under the MIT License.
Unlike Cal.com's "Open Core" model, Cal.diy has no commercial/enterprise code. The entire codebase is available under the same open-source license.
- Add the scopes .../auth/calendar.events and .../auth/calendar.readonly, then select Update.
- Set the authorized redirect URIs <Cal.diy URL>/api/integrations/googlecalendar/callback and <Cal.diy URL>/api/auth/callback/google, replacing Cal.diy URL with the URI at which your application runs.
- Add the credentials to your .env file as the value for the GOOGLE_API_CREDENTIALS key.

After adding Google credentials, you can now add the Google Calendar App to the app store. You can repopulate the App Store by running:
cd packages/prisma
yarn seed-app-store
You will need to complete a few more steps to activate the Google Calendar App. Make sure to complete the section "Obtaining the Google API Credentials". After that, do the following:
Google Calendar (activation):
- Set the redirect URI <Cal.diy URL>/api/auth/callback/google.

Office 365 Calendar:
- Set the redirect URI <Cal.diy URL>/api/integrations/office365calendar/callback, replacing Cal.diy URL with the URI at which your application runs.

Zoom:
- Copy the client ID and secret into your .env file under the ZOOM_CLIENT_ID and ZOOM_CLIENT_SECRET fields.
- Set the redirect URI <Cal.diy URL>/api/integrations/zoomvideo/callback, replacing Cal.diy URL with the URI at which your application runs.
- Add the scopes meeting:write:meeting and user:read:settings.

Daily:
- Copy your API key into the DAILY_API_KEY field in your .env file.
- Set the DAILY_SCALE_PLAN variable to true in order to use features like video recording.

Basecamp:
- Set the redirect URI <Cal.diy URL>/api/integrations/basecamp3/callback, replacing Cal.diy URL with the URI at which your application runs.
- Copy the client ID and secret into the BASECAMP3_CLIENT_ID and BASECAMP3_CLIENT_SECRET fields.
- Set the BASECAMP3_USER_AGENT env variable to {your_domain} ({support_email}).

HubSpot:
- Copy the client ID and secret into your .env file under the HUBSPOT_CLIENT_ID and HUBSPOT_CLIENT_SECRET fields.
- Set the redirect URI <Cal.diy URL>/api/integrations/hubspot/callback, replacing Cal.diy URL with the URI at which your application runs.
- Add the scopes crm.objects.contacts and crm.lists.

Zoho CRM:
- Copy the client ID and secret into your .env file under the ZOHOCRM_CLIENT_ID and ZOHOCRM_CLIENT_SECRET fields.
- Set the redirect URI <Cal.diy URL>/api/integrations/zohocrm/callback, replacing Cal.diy URL with the URI at which your application runs.

Cal.diy uses Unkey for rate limiting. This is an optional feature and is not required for self-hosting.
If you want to enable rate limiting:
- Create a root key with the permissions ratelimit.create_namespace and ratelimit.limit.
- Add the key to your .env file in the UNKEY_ROOT_KEY field.

Note: If you don't configure Unkey, Cal.diy will work normally without rate limiting enabled.
We welcome contributions! Whether it's fixing a typo, improving documentation, or building new features, your help makes Cal.diy better.
Important: Cal.diy is a community fork. Contributions to this repo do not flow to Cal.com's production platform. See CONTRIBUTING.md for details.
Even small improvements matter — thank you for helping us grow!
We have a list of help wanted issues containing small features and bugs with a relatively limited scope. This is a great place to get started, gain experience, and get familiar with our contribution process.
Don't code but still want to contribute? Join our Discussions and help translate Cal.diy into your language.
Cal.diy is built on the foundation created by Cal.com and the many contributors to the original project. Special thanks to: