Why Easy Self-Hosted Solutions Are Sabotaging Your Application's Success
Discover the hidden pitfalls of easy self-hosted solutions. While they promise simplicity, they often lead to unforeseen challenges and complications. Learn why traditional deployment methods might save you time and headaches in the long run.
Perhaps you’re familiar with the dilemma: you want to host your application without incurring the ever-rising costs of the cloud providers.
In many cases, a self-hosted solution like Coolify seems effortless: your application is just a click away from being live and ready to go!
But somehow, you can't shake the feeling... this just feels a bit too easy.
I had just spun up a VPS at Hetzner and installed Coolify on the instance. Everything was going really well, and it was easy to set up. The seamless way of deploying an application by just connecting to my GitHub repository and pointing to a Dockerfile was amazing!
I even created a PostgreSQL database with more or less a single click. The database came with an option to create an SQL backup job to an S3 storage, which I set up in a couple of minutes.
When hosting the application, Coolify handled everything—from deployment and continuous integration to managing the ingress to my application, handling certificates, and routes.
But while I was clicking away in the web application, deploying my application, and setting up backups, a lot of configuration was being generated in the backend: Traefik proxy configurations, Docker Compose files, SQL backups, and proprietary backup files, all with little to no documentation on how they worked. I felt a bit uneasy that I couldn't find documentation for so many of these pieces.
Then everything went wrong.
After having some applications hosted on the Coolify instance for a couple of weeks, I was about to change the domain for all my infrastructure services.
Now, this shouldn't be the most difficult thing in the world. In my head, it should involve changing a configuration for which domain to listen on, updating some DNS records, and maybe renewing a certificate if that isn't automated.
As a good developer, I went to the documentation to see if I could find anything on how such an operation should be done, but, as with everything else, I couldn't find anything useful. And, like any other developer who can't find something in the docs, I turned to Google. Still no luck.
So, I thought, I'll have to try and see what happens. I made sure the database was backed up, and even made sure that the VPS itself was backed up, and then I went for it. I changed the instance configurations to look for the new domain, and afterward, I changed the DNS.
This domain update crashed the instance. Not my applications, just the instance itself. And here is where the real problem appears: when working with these clickable GUIs, the moment something breaks you suddenly need a terminal to fix it, but you have no idea where to start.
I spent the next couple of hours scouring the filesystem on my VPS in search of any configuration or YAML file that resembled some sort of Coolify configuration, but with no luck. Nothing I tried made the instance available again.
I then figured I would roll back the VPS to at least have access to the instance. So, I rolled back to the latest backup, a couple of hours old.
This allowed me to access the instance login screen over HTTP, but now I was no longer able to log in with my account. Again, with nothing to help me troubleshoot the problem, I figured I needed to abandon ship and move the application away. Thank God for the SQL backup job.
But, of course, this should not be as simple as I had hoped.
The backup files were in a format that neither pg_restore nor pgAdmin could read. My best guess is that they were stored in some proprietary format for PostgreSQL backups. Again, I spent the next couple of hours figuring out how to get a usable backup of my database, and by this point I was getting a bit nervous.
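For context, this is roughly what I had expected to be able to do with a standard PostgreSQL backup (file and database names here are placeholders, not the actual ones from my setup):

```bash
# Inspect and restore a standard custom-format dump with pg_restore
pg_restore --list backup.dump                        # list the contents of the dump
pg_restore -d mydb --clean --if-exists backup.dump   # restore it into an existing database

# Or, for a plain-text SQL dump, feed it straight to psql
psql -d mydb -f backup.sql
```

Neither of these worked on the files the backup job had produced.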
I managed to SSH into the server, use Docker to connect to the PostgreSQL container, and run a pg_dump command, which gave me a backup of the database. I could then use SCP to download the file from the server and restore the application and its database on a different VPS.
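If you ever find yourself in a similar spot, the escape hatch looked roughly like this (container name, database name, user, and paths are placeholders for my actual setup):

```bash
# On the VPS: dump the database straight from the running Postgres container
docker exec -t coolify-postgres pg_dump -U myuser -d mydb -F c -f /tmp/mydb.dump

# Copy the dump out of the container onto the host
docker cp coolify-postgres:/tmp/mydb.dump /root/mydb.dump

# From your local machine: pull the file down with SCP
scp root@your-vps:/root/mydb.dump .

# Restore it into a fresh database on the new server
pg_restore -d mydb --clean --if-exists mydb.dump
```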
Finally, after an uphill battle, I got my backup out and my data was saved. In the end, this entire event took me eight hours of frustration and troubleshooting just to recover an application that took three clicks to deploy. Was it worth it?
I was lured into placing my application in production on the Coolify instance, and I even migrated my database to the instance in hopes of saving some money. I was lucky that this was a fairly small application with a small database, which allowed me to have minimal downtime. But if it had been a large application with a large database, I could have been in a whole different situation.
I hope that everyone reading this story will think about what they use these tools for, and perhaps choose a different strategy than I did. My best advice is to keep your database on a separate system or cloud provider.
Now, I'm not saying you shouldn't use these types of tools; their intentions are great, and when they work, they make your life 10x easier. But how much easier is it, really, than deploying a couple of Docker Compose files with Nginx on a VPS yourself?
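For comparison, here is a minimal sketch of the kind of manual workflow I mean (the paths, names, and the idea that your compose file already describes your app, Postgres, and Nginx are assumptions, not a prescription):

```bash
# Copy a compose file describing your stack (app + Postgres + Nginx) to the VPS
scp docker-compose.yml root@your-vps:/opt/myapp/

# On the VPS: start everything in the background
ssh root@your-vps "cd /opt/myapp && docker compose up -d"

# Deploying a new version later is just a pull and a restart
ssh root@your-vps "cd /opt/myapp && docker compose pull && docker compose up -d"
```

Every file involved is one you wrote yourself, so when something breaks, you at least know where to look.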
What did I work on this week?
This week I worked on creating a simple workflow for deploying, maintaining, and updating a full-stack web application on a VPS as cheaply as possible.
My thought was to use stable tools such as Docker and maybe Ansible to automate as much as possible. In the process, I wrote a technical article on how you can do all the core steps manually and host your full-stack web application on a VPS for under $5/month.
Besides this, I am working on a SaaS product that I expect to go live with on the 1st of September.