When deploying and configuring Linux as part of my Terraform plans, I have generally used a combination of script-based or inline remote-exec declarations to get the job done. A while back, while working on my Ansible CentOS Terraform deployment hack, I hit an error when the shell script reached the point where I needed to either log out or reboot to close off the existing session.

The reboot was required to set the correct Python version for the user session in Bash.

Effectively, because Terraform didn't receive a clean exit status or signal, it threw the error and stopped the plan. After digging around and discovering that Terraform can't send a reboot command itself to a vSphere VM (unless I've missed the command), I came across the Failure Behavior setting for provisioners.
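To make the failure mode concrete, here is a minimal sketch of the kind of declaration that triggers it, written in the pre-0.12 HCL style of the era. The resource, inline commands, and connection details are illustrative assumptions, not taken from my original plan; the point is the reboot at the end of the inline script, which drops the SSH session before Terraform receives a clean exit status:

    resource "vsphere_virtual_machine" "centos" {
      # ... VM and clone configuration omitted ...

      provisioner "remote-exec" {
        inline = [
          "sudo yum install -y python36",
          # Rebooting here drops the SSH session mid-provisioner, so
          # Terraform never gets an exit status and fails the apply.
          "sudo reboot",
        ]

        connection {
          type     = "ssh"
          user     = "root"
          password = "${var.vm_password}"
          host     = "${self.default_ip_address}"
        }
      }
    }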

Failure Behavior

By default, provisioners that fail will also cause the Terraform apply itself to fail. The on_failure setting can be used to change this. The allowed values are:

  • "continue" – Ignore the error and continue with creation or destruction.
  • "fail" – Raise an error and stop applying (the default behavior). If this is a creation provisioner, taint the resource.

So the quick fix is to simply add the following inside the remote-exec provisioner block:

on_failure = "continue"

This allows Terraform to move on to the next declaration and proceed with executing the plan. The default value is "fail", so there is no need to set it explicitly if you want the normal behaviour of exiting on failure. Again, it would be nice if Terraform added a feature where the vSphere provider could reboot/reset the VM as a declarable action.
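For context, here is a minimal sketch of how the fix slots into the provisioner block, again with illustrative connection details and variable names rather than anything from my original plan:

    provisioner "remote-exec" {
      inline = [
        "sudo reboot",
      ]

      # The dropped SSH session still registers as a failure, but
      # Terraform now ignores it and moves on to the next declaration.
      on_failure = "continue"

      connection {
        type     = "ssh"
        user     = "root"
        password = "${var.vm_password}"
        host     = "${self.default_ip_address}"
      }
    }

    # A follow-up provisioner then picks up once the VM is reachable
    # again; Terraform retries the SSH connection until it succeeds or
    # the connection timeout expires, so a generous timeout gives the
    # VM time to come back from the reboot.
    provisioner "remote-exec" {
      inline = [
        "python --version",
      ]

      connection {
        type     = "ssh"
        user     = "root"
        password = "${var.vm_password}"
        host     = "${self.default_ip_address}"
        timeout  = "10m"
      }
    }

Because the connection is retried until its timeout, the second provisioner effectively doubles as a wait-for-reboot step.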

Resources:

https://www.terraform.io/docs/provisioners/index.html#failure-behavior