TL;DR: this is a bit ranty, and not necessarily well organized. You will be surprised to learn that I have not put this text through an AI agent to review and rework it. You have been warned: you are reading something that was entirely written by a damn human being.

This blog post does contain an important announcement, though, so you should still read it.

# So long, and goodnight GitHub

I have been meaning to write a blog article like this one for a long time now (months? years?). Back in June 2024, I wrote a blog post about switching to an email-based workflow and about my will to move towards a more decentralized life. I have been having issues with many of the services I use, GitHub included. The fact that Microsoft bought it started to itch me, for the simple reason that it was obvious what Microsoft had been up to all along.

# You are the product

Remember this?

> If you happen to be using a company-owned service for free, you are very likely the product.

So back then, I decided to start moving my main repositories — i.e. the ones I actively maintain — as soon as possible, so that new work would happen on platforms I believe in more (SourceHut), while keeping archives of the existing versions of the projects on GitHub.

As time passed, my opinion on how I should run my spare-time projects matured. We’ve seen the advent of so-called « AI », a term I despise. It means absolutely nothing and is just a marketing buzzword, but it is also a massive, global engineering delusion that is going to cause tons of harm in the mid and long term. What do you think the developers of tomorrow will look like, without a single clue about what stack corruption means, or tail recursion, or even simply how to think for themselves and be creative, when all they will have learned is to reach for their Claude prompt or Copilot 3000? AI models were supposed to be programmers’ assistants, not the other way around.

People won’t really think about how to solve a problem. They will ask the AI to solve the problem for them, and merely have to validate the solution. This is currently possible because most programmers out there actually have the required skills to understand and check the (most of the time terrible) code that is spit out by Cursor. At some point, programmers will start to actually be taught by AI, and will lack the foundational skills to verify that code. I strongly believe that not coming up with the actual implementation is going to create a generation of developers lacking the creativity, resilience and methodologies required to be a good programmer, but also the ability to simply read and understand code in the first place.

This is just one side of the issue.

I will not even mention the energy problem; it’s well documented (in short: AI sucks hard). Instead, I want to talk about two things.

# Copyleft violation

The AI ecosystem gives little regard to copyleft licenses and to actually contributing in a collaborative way with the rest of the world. I have shared my thoughts on that several times, but it is a wholesale violation of millions of codebases (ripping off free and open-source code in order to train a company-owned, proprietary technology that users have to pay for, one way or another), all while evaporating lakes. I am not okay with my code being used to train proprietary software, code that is part of profitable services that no one in the free and open-source world could even dream of replicating, due to obvious… budget differences. People used to say that companies using free and open-source code should give a bit of money to the people actually maintaining that software in their spare time; it’s code they rely on, that they didn’t have to write, and that they don’t have to maintain. And yet, now, most people seem unbothered by the idea of companies absorbing the whole Internet to train services that they will charge you for. WTF?!

# Aggressive crawlers

The AI ecosystem is so aggressive that it causes trustworthy third parties to experience DDoS-level traffic and various other kinds of issues, which is absolutely disgusting. That led to people coming up with mitigations like AI Labyrinth or human-detection algorithms in front of forges. Those are not real solutions, because such platforms still have to spend CPU resources to fight off aggressive AI crawlers. If you are implementing a crawler that doesn’t respect robots.txt or similar conventions, you are part of the problem.
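For what it’s worth, honoring robots.txt takes almost no effort. Here is a minimal sketch in Python, using only the standard library; the user agent string, the forge domain and the target URL are made up for the example, not taken from any real crawler:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity and target page, for illustration only.
USER_AGENT = "my-crawler/0.1"
TARGET_URL = "https://example-forge.org/~someone/some-project"

robots = RobotFileParser()
robots.set_url("https://example-forge.org/robots.txt")
robots.read()  # fetch and parse the forge's robots.txt

if robots.can_fetch(USER_AGENT, TARGET_URL):
    print("Allowed to fetch", TARGET_URL)
else:
    print("robots.txt forbids it; back off")
```

Handling crawl delays and rate limits takes a bit more work, but even this check alone would spare forges a lot of pointless traffic.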

# Clear evidence

And then, there was this tweet from Thomas Dohmke — former GitHub CEO:

> The evidence is clear: Either you embrace AI, or get out of this career.

This is like spitting directly in the faces of millions of developers worldwide. Of course, coming from a guy whose career depends on people actually following and paying for his AI development model, this is hilarious at best, yet disrespectful to people who actually do build software instead of babysitting a drunk autocompleter that understands nothing about your project and keeps wasting everyone’s energy asking you to review its triangle drawing when it actually drew a circle.

That so-called evidence is based on 22 engineers who use AI in their day-to-day work. 22. Twenty-two. Vingt-deux. If that is evidence for him, and if we should trust his judgement, then I officially announce that I believe FSM will be president of the world by 2050 — and I am almost 100% sure you can find more people believing my claim than his.

I do believe that we need to take a step back from technology and think about where we want to stand. From what I have read, AI is making us dumber. I do think that at some point, when we need to come up with ideas for new problems, AI will fall short. I — probably like many others — have had to deal with Cursor comments at work on PRs I opened for my colleagues to review, only to realize that the Cursor AI had left a dumb comment about my code, proving that 1. it doesn’t understand the project and 2. even the suggestion it makes is wrong, because it doesn’t really look into the context of the project. This just makes us lose and waste time at work, but hey, we are using AI.

I do think AI agents are tools, and my opinion is that they are not mature tools (yet?), and that I do not want to have to interact with them (yet?). Woah, I don’t use AI to review the emails I send! Damn, I don’t use AI to write my blog articles! Holy shit, I don’t ask an AI what it thinks about the code I write! And you know what? Why would I? Why would I ask an AI to write a speech describing a project I work on, instead of actually being a human being, sitting at my desk and activating the actual, real neurons I still have in my brain? We have had AI for a long time: a good compiler is a form of AI. But we have always used AI dedicated to solving a very specific problem. What those new AI agents are doing is trying to replace human thinking with a hyper-advanced auto-completer. Yeah, I’ll pass.

# Announcement

Last year, I started moving my projects off GitHub to SourceHut. I want to use this blog article to officially say that I have started to move the rest of my projects off GitHub, and that I will not leave archived repositories on GitHub either. Of course, I am not doing that without first checking the package managers and any dependent systems (I’m thinking of Hackage for Haskell packages, and crates.io for Rust). So it will take a bit of time — I actually have quite a lot of source projects! — but I will eventually reach a point where none of the repositories I have on GitHub are source projects.

If you depend on any of my GitHub projects, feel free to switch to the SourceHut tree instead.

As always, keep the vibes, and goodbye GitHub.


github, sourcehut
Tue Aug 19 15:15:00 2025 UTC