Ubuntu 18.04 / 20.04 EOL: migration playbook
Ubuntu 18.04 is past EOL; 20.04 standard support ended in April 2025. Plan the OS migration before the apt repos go cold and the kernel CVEs stack up.
Ubuntu LTS releases get five years of standard support and another five of paid Extended Security Maintenance. 18.04 ended standard support in April 2023; ESM runs until April 2028. 20.04 ended standard support in April 2025; ESM runs to April 2030.
ESM is a real option, but it is not a strategy. The longer a server runs on an EOL OS, the more the rest of the stack drifts away from anything supported on it — newer PHP, MySQL, and Node versions stop publishing packages for the old apt repositories, and the kernel CVE backlog grows. The migration is cheaper in 2026 than in 2028.
1. Confirm what you are on
A surprising number of production servers are running an OS version nobody on the current team has actually checked.
- `lsb_release -a` — the version
- `uname -r` — the kernel
- `pro status` (Ubuntu Pro / ESM) — whether ESM is active and what services are covered
- `cat /etc/os-release` — for older systems that may pre-date `lsb_release`
If the server is on 16.04 or earlier, you are not migrating; you are doing an emergency cutover. The advice below still applies, but the timeline shortens.
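The checks above fit in one short script. This is a minimal sketch that assumes a standard Ubuntu/Debian layout and degrades gracefully where `lsb_release` or the `pro` client is absent:

```shell
#!/bin/sh
# Collect the facts before planning anything.
if command -v lsb_release >/dev/null 2>&1; then
    lsb_release -a
else
    # Older or minimal systems may pre-date lsb_release.
    cat /etc/os-release
fi
echo "Kernel: $(uname -r)"
if command -v pro >/dev/null 2>&1; then
    pro status          # is ESM active, and for which services?
else
    echo "Ubuntu Pro client not installed; ESM is not active here."
fi
```

Paste the output into the migration ticket; the rest of the plan hangs off these three facts.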
2. Inventory anything pinned to the OS
The OS upgrade is rarely about the OS. It is about everything that was installed against it.
- Runtime versions. What PHP, Python, Node, Ruby, and Java versions are in use? Where did they come from — the distribution apt repo, a PPA (Ondřej for PHP, deadsnakes for Python), or a manual install? The PPA is usually the answer.
- Web server. Apache, Nginx, or Caddy version. The config syntax has not changed dramatically across recent Ubuntu versions, but module availability has.
- systemd unit files. Any unit files in `/etc/systemd/system/` or service overrides. These usually move cleanly, but assumptions about user accounts and paths sometimes do not.
- apt sources. `cat /etc/apt/sources.list /etc/apt/sources.list.d/*` — every third-party repo. Each one needs a 20.04/22.04/24.04 equivalent before you can move.
- Locale and timezone. `localectl` and `timedatectl`. Easy to forget; surprisingly disruptive when wrong.
- Cron jobs and absolute paths. Anything in `/etc/cron.d`, `/etc/cron.daily`, or user crontabs. Paths that reference the OS-specific binary location can quietly break — Ubuntu has moved a few binaries between releases.
- fail2ban, logrotate, and similar config. These rules sometimes reference syslog formats or service names that have changed.
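One way to capture this inventory as a single reviewable artifact; the paths are standard Ubuntu locations and the output filename is arbitrary:

```shell
#!/bin/sh
# Write a point-in-time inventory of OS-pinned configuration to one file.
out=/tmp/os-migration-inventory.txt
{
    echo "== apt sources (every third-party repo needs a new-release equivalent) =="
    grep -rhs '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ | sort -u || true
    echo "== hand-installed systemd units and overrides =="
    ls /etc/systemd/system/ 2>/dev/null || true
    echo "== cron entries outside user crontabs =="
    ls /etc/cron.d /etc/cron.daily 2>/dev/null || true
} > "$out"
echo "inventory written to $out"
```

Commit the file next to the migration runbook; diffing it against the same script's output on the new server is a cheap completeness check.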
3. Decide: in-place upgrade vs new server cutover
Two options, in order of how much we recommend them.
- New server cutover (recommended). Provision a fresh 22.04 or 24.04 server, install the stack from scratch, sync data, switch DNS. Rollback is a DNS change. The new server is configurable, repeatable, and the old server is intact as a recovery option.
- In-place `do-release-upgrade`. Faster but less reversible. The upgrade modifies the running system; rollback is a backup restore, not a DNS flip. Acceptable on systems where the alternative is no upgrade at all, and where the data layer lives on a separate machine.
If you cannot afford a parallel server for two weeks, the in-place upgrade is the right call. If you can, do not. The cleanup after a successful in-place upgrade always takes longer than the cutover would have.
4. Stand up the new server in parallel
For the cutover path:
- Provision the new server. Pick the LTS that gives you the longest runway — currently 24.04, with standard support until April 2029.
- Install the runtime stack from the same PPAs or upstream sources you used on the old server. Match versions deliberately; this is not the time to upgrade PHP and Ubuntu in the same change.
- Copy the application code, environment files, and any configuration that lives outside the repo.
- Sync the database. For MySQL, replication is the cleanest path — see MySQL 5.7 EOL for the replication-first approach when a MySQL upgrade is bundled with the OS migration.
- Sync any uploaded files with `rsync`. Run an initial pass days before cutover; an incremental pass minutes before.
- Run the application’s smoke tests against the new server using a hosts-file override. Do not let real traffic hit it yet.
The point of running both servers in parallel is that the new one becomes increasingly real over the course of a week. By cutover day, the only difference between the two is which one DNS points at.
5. Cutover plan
A clean cutover is a checklist, not a heroic act.
- Drop DNS TTL to 60 seconds at least 24 hours before the cutover.
- Run a final `rsync` of any file storage. Run a final database sync.
- Put the application into maintenance mode if there is a write-heavy step.
- Update DNS. Verify with `dig +short` from at least two locations.
- Bring the application out of maintenance mode on the new server.
- Run the post-cutover smoke tests.
- Restore DNS TTL to its prior value once you are confident.
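A small helper for the `dig` verification step; the expected address below is a placeholder from the TEST-NET range, and the resolver list in the comment is an assumption:

```shell
#!/bin/sh
# The address the new server should resolve to (placeholder).
EXPECTED_IP=203.0.113.10

# Takes the output of `dig +short <name> @<resolver>` and succeeds only if the
# last line (the final A record after any CNAME chain) matches the new address.
dns_cut_over() {
    [ "$(printf '%s\n' "$1" | tail -n 1)" = "$EXPECTED_IP" ]
}

# During cutover, check several public resolvers:
#   for r in 1.1.1.1 8.8.8.8; do
#       dns_cut_over "$(dig +short www.example.com @"$r")" || echo "still stale via $r"
#   done
dns_cut_over "203.0.113.10" && echo "resolver agrees"
```

Checking the last line matters because `dig +short` prints the CNAME chain before the A record when one exists.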
Timeline: from maintenance-mode-on to maintenance-mode-off, this should be five to fifteen minutes for most systems. If the data sync is the long pole, see if the application’s writes can be replayed from logs after cutover instead of synced before.
6. Post-cutover
The cutover is not the end of the migration. The week after is.
- Leave the old server running, in read-only mode, for at least seven days. It is the rollback path. After seven days of clean traffic on the new server, retire the old one.
- Watch the logs on the new server for paths that 404, file uploads that fail, or background jobs that error out. These are usually the result of paths or permissions that did not survive the move.
- Verify monitoring is wired up. The new server’s metrics should be visible in the same dashboards as the old one. If they are not, fix that before you forget.
- Document the migration. The runbook is what makes the next OS migration a routine exercise instead of an emergency.
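For the log-watching step, a quick way to surface the 404 pattern; this assumes the common/combined access-log format (status code as the ninth whitespace-separated field) and the default nginx log path:

```shell
#!/bin/sh
# Count the most-requested paths that are returning 404 (reads an access log on stdin).
top_404s() {
    awk '$9 == "404" { print $7 }' | sort | uniq -c | sort -rn | head
}

# Example against two fabricated log lines:
printf '%s\n' \
  '203.0.113.1 - - [01/Jan/2026:00:00:00 +0000] "GET /old/path HTTP/1.1" 404 0 "-" "-"' \
  '203.0.113.1 - - [01/Jan/2026:00:00:01 +0000] "GET /ok HTTP/1.1" 200 0 "-" "-"' \
  | top_404s

# In production: top_404s < /var/log/nginx/access.log
```

A spike concentrated on one path prefix usually means a rewrite rule or symlink that did not survive the move.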
Common gotchas
A few things that quietly bite during the cutover week:
- Cron paths. `/usr/bin/php` on 22.04 may point at a different version than on 18.04 if PPAs are configured differently. Use absolute paths to specific versions in cron, not the system default.
- fail2ban regex changes. Default jail regexes can fail silently against newer log formats. Run `fail2ban-regex` against a real log line on the new server.
- Locale. A missing locale (often `en_US.UTF-8` on minimal images) will cause subtle string-handling bugs in PHP and Python. `locale -a` should list it; if not, run `locale-gen` and update `/etc/default/locale`.
- Timezone drift. Application servers in Europe with a default UTC system clock are common; check that database timestamps match application expectations after cutover.
- MTAs and outbound mail. Postfix, sendmail, or an SMTP relay configuration is often the last thing that gets noticed. Test outbound email before declaring the migration done.
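The locale gotcha is the easiest to pre-check. A minimal check-then-report sketch; the fix commands in the comment are the standard Ubuntu ones, shown rather than run because they need root:

```shell
#!/bin/sh
# Is en_US.UTF-8 actually generated on this image?
# (`locale -a` typically prints it as "en_US.utf8", hence the loose match.)
if locale -a 2>/dev/null | grep -qiE '^en_US\.(utf-?8)$'; then
    status=present
else
    status=missing
    # Fix on Ubuntu (needs root):
    #   locale-gen en_US.UTF-8
    #   update-locale LANG=en_US.UTF-8   # writes /etc/default/locale
fi
echo "en_US.UTF-8: $status"
```

Run it as part of the new server's provisioning, not after the first mangled string shows up in production.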
The broader playbook
OS migration is one third of the modernization triangle: language runtime, database, and operating system. The same discipline applies to all three — audit first, parallelize where possible, prove rollback before you need it. The legacy PHP modernization playbook covers all three pieces, and the PHP EOL checklist is the language-runtime sibling to this one.
If the system has all three pieces drifting at once — EOL OS, EOL runtime, EOL database — do not try to upgrade them in one change. The combinatorics of breakage make the diagnostic work impossible. Sequence them, with an audit up front to decide the order.