
Disaster Averted: How I Recovered My Blog from a MySQL Meltdown
There’s nothing like a good old-fashioned data disaster to remind you of the importance of backups. Recently, ChaseTheHare.com went dark after its MySQL Docker container failed to start during a server migration.
Table of Contents
- 🖥️ What Went Wrong: The Migration Trigger
- 📂 The Diagnosis: Raw .ibd Files, No Schema
- 🧩 Recovery Plan
- 🧱 Step 1: Extracting SQL from .ibd Files
- 🔧 Step 2: Rebuilding the MySQL Database
- 🚧 Common Errors (And Fixes)
- ❌ ERROR 1067: Invalid default value for datetime
- ❌ ERROR 1089: Incorrect prefix key
- ❌ ERROR 1366: Incorrect string value (emoji)
- ✅ Result: Blog Resurrected
- 🧱 My New Backup Strategy (Daily, Dockerised & Server-Wide) 💾
- Notes
- 🔚 Final Thoughts
Years of articles, images, embedded Strava maps, and paddleboard adventures — seemingly gone in an instant. But thanks to some late-night debugging, stubborn determination, and powerful open-source tools, I got it all back.
This is the story of how I recovered my corrupted WordPress MySQL database using raw `.ibd` files — and why backup hygiene matters.
🖥️ What Went Wrong: The Migration Trigger
I’m in the process of moving between Hetzner AX41-NVMe dedicated root servers. During the migration, I stopped the Docker containers on the original server, copied everything over to the new one, and brought the containers back up.
That’s when I noticed MySQL was refusing to start on the new box.
No problem, I thought — I’ll just bring it back up on the old server. But it was already too late.
The database container now failed to start on both servers, throwing corruption errors. The data volume was broken.
Panic mode: engaged. 😱
📂 The Diagnosis: Raw `.ibd` Files, No Schema
When I inspected the MySQL data directory (`/var/lib/mysql/`), the schema was missing, but dozens of `.ibd` files were still there — containing the table data. No `.frm` or `CREATE TABLE` structure files survived.
That meant:
- No access via `mysqldump`
- No way to boot WordPress
- No admin login, no phpMyAdmin
- A very bad day
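A quick way to see how bad the damage is before committing to a recovery route is to count what survived in the rescued data directory. A minimal sketch — the default path is an assumption, so point it at wherever you copied the files:

```shell
#!/bin/bash
# Sketch: triage a rescued MySQL data directory.
# The default DATADIR is an assumption -- pass your own path as $1.
DATADIR="${1:-./db/chasetheharev2}"

# Raw InnoDB tablespaces (the table data) that survived.
ibd_count=$(find "$DATADIR" -maxdepth 1 -name '*.ibd' 2>/dev/null | wc -l)
# Pre-8.0 table definition files; MySQL 8 keeps definitions in its internal
# data dictionary instead, so these may be absent even on a healthy install.
frm_count=$(find "$DATADIR" -maxdepth 1 -name '*.frm' 2>/dev/null | wc -l)

echo "ibd files: $ibd_count, frm files: $frm_count"
```

Lots of `.ibd` files and no table definitions is exactly the situation `ibd2sql` is built for.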
🧩 Recovery Plan
I settled on a recovery approach using ibd2sql, which can extract data from `.ibd` files even when the rest of MySQL’s metadata is missing.
The plan was simple (ish):
1. Use `ibd2sql` to convert `.ibd` → `.sql`
2. Spin up a new MySQL container
3. Recreate the database schema and import data
4. Bring WordPress back to life
5. Celebrate with beer 🥳
🧱 Step 1: Extracting SQL from `.ibd` Files
With `ibd2sql`, a single table’s structure and data can be exported to SQL in one command:

```bash
python3 main.py /path/to/file.ibd --sql --ddl > wp_posts.sql
```

Using that command as a basis, I wrote a bash script to loop through all the `.ibd` files and export their content:
```bash
#!/bin/bash
INPUT_DIR="./db/chasetheharev2"
OUTPUT_DIR="./recovered_sql"

mkdir -p "$OUTPUT_DIR"

for ibd_file in "$INPUT_DIR"/*.ibd; do
    table_name=$(basename "$ibd_file" .ibd)
    output_file="$OUTPUT_DIR/${table_name}.sql"

    echo "🔄 Converting: $table_name"

    if python3 main.py "$ibd_file" --sql --ddl --table "$table_name" > "$output_file"; then
        echo "✅ Saved to: $output_file"
    else
        echo "❌ Failed: $table_name"
    fi
    echo "-------------------------------------------"
done

echo "🎉 Done. All SQL dumps are in $OUTPUT_DIR"
```
The output: a full set of `.sql` files in a `recovered_sql/` folder, ready for import.
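Before importing anything, it’s worth sanity-checking what the conversion actually produced. A small sketch (the folder name matches the script above — adjust if yours differs):

```shell
#!/bin/bash
# Sketch: count DDL and data statements in each recovered dump.
OUTPUT_DIR="${OUTPUT_DIR:-./recovered_sql}"

for f in "$OUTPUT_DIR"/*.sql; do
    [ -e "$f" ] || continue   # folder may be empty or the glob unmatched
    ddl=$(grep -c 'CREATE TABLE' "$f")
    rows=$(grep -c 'INSERT INTO' "$f")
    echo "$(basename "$f"): $ddl CREATE TABLE, $rows INSERT statements"
done
```

A dump with a `CREATE TABLE` but zero `INSERT` statements is a red flag worth investigating before you blame the import step.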
🔧 Step 2: Rebuilding the MySQL Database
I then spun up a brand new MySQL 8 container with a clean volume, using the same Docker Compose stack — just mapped to a different data directory.
To avoid the “unknown database” errors, I scripted the database creation:

```sql
CREATE DATABASE chasetheharev2 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```
Then I built a robust import script that:
- Loops through each `.sql` file
- Imports it into the fresh DB inside the Docker container
- Logs success or failure
- Handles emoji encoding, malformed datetime values, and table key mismatches
```bash
#!/bin/bash
DB_NAME=""
DB_USER=""
DB_PASS=""
SQL_DIR="./ibd2sql/recovered_sql"
CONTAINER_NAME=""
LOG_FILE="import_log_$(date '+%Y%m%d_%H%M%S').log"

# === LOGGING HEADER ===
{
    echo "📝 Starting SQL import into '$DB_NAME'"
    echo "📂 Source folder: $SQL_DIR"
    echo "🚢 Using container: $CONTAINER_NAME"
    echo "🔁 Import started at: $(date)"
    echo ""
} | tee -a "$LOG_FILE"

# === DROP & RECREATE DATABASE ===
echo "🧨 Dropping and recreating database '$DB_NAME' inside Docker..." | tee -a "$LOG_FILE"
docker exec -i "$CONTAINER_NAME" mysql -u"$DB_USER" -p"$DB_PASS" -e "DROP DATABASE IF EXISTS \`$DB_NAME\`;" 2>>"$LOG_FILE"
docker exec -i "$CONTAINER_NAME" mysql -u"$DB_USER" -p"$DB_PASS" -e "CREATE DATABASE \`$DB_NAME\` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;" 2>>"$LOG_FILE"
echo "✅ Fresh database created." | tee -a "$LOG_FILE"
echo "" | tee -a "$LOG_FILE"

# === IMPORT EACH SQL FILE ===
echo "🚀 Starting full import of all SQL files in $SQL_DIR..." | tee -a "$LOG_FILE"
echo "" | tee -a "$LOG_FILE"

for sql_file in "$SQL_DIR"/*.sql; do
    filename=$(basename "$sql_file")
    echo "📥 Importing: $filename" | tee -a "$LOG_FILE"

    if docker exec -i "$CONTAINER_NAME" mysql -u"$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$sql_file" 2>>"$LOG_FILE"; then
        echo "✅ Imported: $filename" | tee -a "$LOG_FILE"
    else
        echo "❌ Failed to import: $filename" | tee -a "$LOG_FILE"
    fi
    echo "-------------------------------------------" | tee -a "$LOG_FILE"
done

echo "🎉 All imports attempted at $(date)." | tee -a "$LOG_FILE"
```
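After a run, the log makes it easy to see which tables need follow-up. A quick sketch (the log filename pattern matches the script above):

```shell
# Sketch: pull any failures out of the newest import log.
latest_log=$(ls -t import_log_*.log 2>/dev/null | head -n 1)

if [ -n "$latest_log" ]; then
    grep '❌' "$latest_log" || echo "No failures 🎉"
else
    echo "No import logs found yet."
fi
```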
🚧 Common Errors (And Fixes)
I encountered a fair number of edge cases during import:
❌ ERROR 1067: Invalid default value for datetime
✅ Fix: Replace `DEFAULT '0000-00-00 00:00:00'` with `DEFAULT CURRENT_TIMESTAMP`, or remove the default entirely.

❌ ERROR 1089: Incorrect prefix key
✅ Fix: Drop the prefix length where the full column can be indexed, e.g.:

```sql
KEY `slug` (`slug`(254)) → KEY `slug` (`slug`)
```

❌ ERROR 1366: Incorrect string value (emoji)
✅ Fix: Ensure your DB uses `utf8mb4` and your collation is `utf8mb4_unicode_ci` or `utf8mb4_unicode_520_ci`.
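Rather than hand-editing dozens of dumps, the datetime and prefix-key fixes can be applied mechanically with `sed`. A sketch, with caveats: the patterns are assumptions based on the errors above, blindly stripping every prefix length is wrong for `TEXT` columns that genuinely need one, and you should diff the `.bak` copies before importing:

```shell
#!/bin/bash
# Sketch: mechanically patch common problems in the recovered dumps.
# SQL_DIR is an assumption; .bak copies are kept so you can review the changes.
SQL_DIR="${SQL_DIR:-./recovered_sql}"

for f in "$SQL_DIR"/*.sql; do
    [ -e "$f" ] || continue
    sed -i.bak -E \
        -e "s/DEFAULT '0000-00-00 00:00:00'/DEFAULT CURRENT_TIMESTAMP/g" \
        -e 's/(`[a-zA-Z0-9_]+`)\([0-9]+\)/\1/g' "$f"
done
```

The second expression only matches a backticked identifier followed directly by a parenthesised number (as in `` `slug`(254) ``), so column type lengths like `varchar(255)` are left alone.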
✅ Result: Blog Resurrected
After a successful import and a few manual adjustments, I pointed WordPress at the restored DB, cleared the cache, and held my breath.
Boom — blog back online. Every post, comment, and featured image had survived.
🧱 My New Backup Strategy (Daily, Dockerised & Server-Wide) 💾
With the dust settled and the blog back online, I knew one thing had to change: no more playing roulette with `.ibd` files.
I’ve now implemented an automated backup strategy across my server using `tiredofit/db-backup` — a brilliant Dockerised backup container that supports MySQL, MariaDB, Postgres, and more.
I’m using it not just for my blog’s database, but across all containers on the server that need regular backups. This keeps things consistent, tidy, and automated.
🔁 My chosen settings:
- Back up every 24 hours
- Keep 3 days’ worth of backups
- Use ZSTD compression to save space
- Store backups in a central `./backups` volume
- Restore-ready `.sql` dumps and optional checksums
📦 Example `docker-compose` snippet for one DB container:
```yaml
services:
  db-backup:
    image: tiredofit/db-backup
    container_name: db-backup
    volumes:
      - ./backups:/backup
    environment:
      - TIMEZONE=Europe/London
      - CONTAINER_NAME=db-backup
      - CONTAINER_ENABLE_MONITORING=FALSE
      # === GLOBAL SETTINGS ===
      - DEFAULT_BACKUP_INTERVAL=1440   # Backup every 24 hours
      - DEFAULT_CLEANUP_TIME=4320      # Retain backups for 3 days (4320 mins)
      - DEFAULT_COMPRESSION=ZSTD       # Use efficient ZSTD compression
      - DEFAULT_CHECKSUM=SHA1          # SHA1 checksums (optional but useful)
      # === DB01: Blog DB ===
      - DB01_TYPE=mariadb
      - DB01_HOST=db_blog
      - DB01_NAME=chasethehare_v2
      - DB01_USER=youruser
      - DB01_PASS=yourpassword
      # === DB02: App Database ===
      - DB02_TYPE=postgres
      - DB02_HOST=app-db
      - DB02_NAME=appdata
      - DB02_USER=appuser
      - DB02_PASS=anotherpassword
      # === DB03: Analytics Database ===
      - DB03_TYPE=mariadb
      - DB03_HOST=analytics-db
      - DB03_NAME=matomo
      - DB03_USER=matomo_user
      - DB03_PASS=matomo_pass
    restart: always
    networks:
      - your-network-name

networks:
  your-network-name:
    external: true
```
Notes:
- Make sure all database containers (`db_blog`, `app-db`, `analytics-db`, etc.) are reachable by name on the same Docker network.
- You can define as many databases as needed using `DB04_`, `DB05_`, etc.
- If a specific database needs different interval/cleanup settings, you can override them per DB:

```yaml
      - DB03_BACKUP_INTERVAL=60   # Every hour
      - DB03_CLEANUP_TIME=1440    # Keep for 1 day
```
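A backup routine is only as good as its restore path, so it’s worth rehearsing that too. A sketch under the settings above — the filename pattern, container name, and credentials are assumptions to adapt to your stack, and `zstd` must be installed on the host:

```shell
# Sketch: restore the most recent ZSTD-compressed dump into a container.
# Path, container name, and credentials below are placeholders -- adjust them.
latest=$(ls -t ./backups/*.sql.zst 2>/dev/null | head -n 1)

if [ -n "$latest" ]; then
    echo "Restoring from: $latest"
    # Decompress to stdout and pipe straight into mysql inside the container.
    zstd -dc "$latest" | docker exec -i db_blog mysql -uyouruser -pyourpassword chasethehare_v2
else
    echo "No backups found in ./backups"
fi
```

Running a restore like this against a throwaway database every so often is the only way to know the backups are actually usable.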
🔚 Final Thoughts
Losing access to your database — especially when it contains years of content — is a horrible feeling. But with the right tools and approach, recovery is possible.
The combo of:
- `ibd2sql` for recovery
- script-driven `.sql` imports
- a container-native backup routine

…has now given me a setup that’s both resilient and easily restorable.
Don’t wait for the next panic — back it up now. 😉