Michael Dyrynda
Altering large tables on Laravel Vapor August 3rd, 2021
If you've ever fallen victim to missing scheduled tasks and, of course, not finding out about it till hours, days, or even weeks later, check out thenping.me where you can get started with hands-free scheduled task monitoring for your Laravel applications from just $9/month.

I recently stumbled upon a peculiar issue with Laravel Vapor where I was getting collisions with index names during database migrations on deploy and wanted to share how to work through this.

Failed database migration on deploy

The problem

My first instinct was "oh, I was testing this migration out against the database directly to make sure it was doing what I needed it to, so I need to first drop the index and try again."

The trouble was that it didn't fix the problem.

As it turns out, Vapor deploys are subject to the same timeout limitations that your CLI lambda is bound by. This means for operations on large tables (or any long-running deploy hook), a timeout is reached and the deploy will fail.

The migration, however, continued to run and completed against the database, so the index already existed on a subsequent attempt, resulting in the same error.

The solution

First, you'll want to get the migration you need to run, assuming it hasn't already been run locally.

php artisan migrate --pretend

This will output the queries Laravel would run, along with the migration key, which we'll need to prevent subsequent deploys from attempting the same migration again.

2021_08_03_021522_add_task_dashboard_metric_index: alter table `executions` add index `task_dashboard_execution_metrics`(`task_id`, `runtime`, `memory`, `created_at`)
2021_08_03_021522_add_task_dashboard_metric_index: alter table `executions` drop index `executions_task_id_created_at_index`

Now, connect to your target (production) database server (using the command line, vapor database:shell, TablePlus, etc.) and identify the next batch value for your migrations table:

select
  max(`batch`) + 1 as `new_batch`
from
  `migrations`; -- 2

Then, run the database migrations generated by the migrate --pretend command.
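In this case, that meant executing the statements from the migrate --pretend output above directly against the production database connection:

```sql
-- Statements taken from the `migrate --pretend` output above;
-- run these against the production database.
alter table `executions` add index `task_dashboard_execution_metrics`(`task_id`, `runtime`, `memory`, `created_at`);
alter table `executions` drop index `executions_task_id_created_at_index`;
```

Because these run outside of Lambda, they can take as long as the database needs without a deploy hook timing out.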

Lastly, you'll need to add an entry to the migrations table for each migration key to prevent future calls to migrate from running the same migration again.

insert into `migrations` (`migration`, `batch`) values
  ('2021_08_03_021522_add_task_dashboard_metric_index', 2); -- The value from the `new_batch` query above

Now your Vapor deploy should proceed as expected, as the migrate command will see that there are no migrations left to run.


Now, in this instance I was adding an index to an existing (large) table, so this approach made sense.

If you are carrying out destructive operations against an active deployment, you will need to be considerate of your approach to ensure your application doesn't start raising errors mid-change.

You may need to use a multi-step add, deploy, remove approach in these scenarios.
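As a rough sketch of that pattern, consider renaming a column on a busy table (the column names here are hypothetical, purely for illustration): add the new column in one deploy, backfill it while both columns exist, then drop the old column in a later deploy once nothing reads from it.

```php
// Deploy A: add the new column alongside the old one (non-destructive).
Schema::table('executions', function (Blueprint $table) {
    $table->unsignedInteger('duration_ms')->nullable(); // hypothetical new column
});

// Deploy A (or a one-off backfill): copy data across while both columns exist.
DB::table('executions')
    ->whereNull('duration_ms')
    ->update(['duration_ms' => DB::raw('`runtime` * 1000')]);

// Deploy B, once no code references the old column: remove it.
Schema::table('executions', function (Blueprint $table) {
    $table->dropColumn('runtime');
});
```

Each step leaves the schema in a state that both the previous and the next version of your application code can work with, so nothing errors mid-change.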


Whilst this is a manual process, the outcome is the same as running artisan migrate as part of your deploy process.

Running and recording migrations manually will become fairly standard practice as your application's database continues to grow, giving you greater flexibility to tackle longer-running migrations that would otherwise time out when run via Lambda with PHP.

As an aside, the index I had to add took a page from... reaching an execution timeout to loading in < 1sec, so I'm pretty happy with that covering index 👍🏻

I am a software developer specialising in PHP and the Laravel Framework, and a freelancer, blogger, and podcaster by night.
