Only one member of a 3-host cluster launches VMs with all CPU flags when 'migration.stateful=true'

I have a 3-host Incus cluster, all hosts running Incus 6.6 on Debian 12, with identical hardware across the board. I set migration.stateful=true on the profiles I use for virtual machines, and after rebooting the VMs it seems Incus only launches QEMU processes with the full set of CPU flags on one of the hosts in the cluster; the other two hosts launch VMs with just -cpu kvm64.

I tried setting instances.vm.cpu.x86_64.baseline=kvm64 and instances.vm.cpu.x86_64.flags=auto on the default cluster group, as outlined in the 6.4 release announcement, but it seems only instances.vm.cpu.x86_64.baseline=kvm64 takes effect. I’m able to work around this by manually populating instances.vm.cpu.x86_64.flags with all the computed flags from the working server.

I’m not sure whether this is a bug, intended behavior, or something I’m doing wrong.

root@server01:~# incus launch images:ubuntu/24.04 test1 --vm --target server01 -c migration.stateful=true
Launching test1
root@server01:~# incus launch images:ubuntu/24.04 test2 --vm --target server02 -c migration.stateful=true
Launching test2
root@server01:~# incus launch images:ubuntu/24.04 test3 --vm --target server03 -c migration.stateful=true
Launching test3
root@server01:~# incus launch images:ubuntu/24.04 test4 --vm --target server04 -c migration.stateful=true
Launching test4

root@server01:~# incus list test
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+
| NAME  |  STATE  |         IPV4          |                      IPV6                       |      TYPE       | SNAPSHOTS | LOCATION |
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+
| test1 | RUNNING | 10.104.61.12 (enp5s0) | fd42:73ae:9013:c530:216:3eff:fe06:f901 (enp5s0) | VIRTUAL-MACHINE | 0         | server01 |
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+
| test2 | RUNNING | 10.104.61.13 (enp5s0) | fd42:73ae:9013:c530:216:3eff:fe8e:e88e (enp5s0) | VIRTUAL-MACHINE | 0         | server02 |
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+
| test3 | RUNNING | 10.104.61.14 (enp5s0) | fd42:73ae:9013:c530:216:3eff:fe93:249a (enp5s0) | VIRTUAL-MACHINE | 0         | server03 |
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+
| test4 | RUNNING | 10.104.61.15 (enp5s0) | fd42:73ae:9013:c530:216:3eff:fe14:1191 (enp5s0) | VIRTUAL-MACHINE | 0         | server04 |
+-------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+----------+

root@server01:~# incus exec test1 -- grep "model name" /proc/cpuinfo
model name	: Common KVM processor
root@server01:~# incus exec test2 -- grep "model name" /proc/cpuinfo
model name	: Common KVM processor
root@server01:~# incus exec test3 -- grep "model name" /proc/cpuinfo
model name	: Common KVM processor
root@server01:~# incus exec test4 -- grep "model name" /proc/cpuinfo
model name	: Common KVM processor

root@server01:~# incus stop test1 test2 test3 test4
root@server01:~# incus cluster group set default instances.vm.cpu.x86_64.baseline=host
root@server01:~# incus start test1 test2 test3 test4

root@server01:~# incus exec test1 -- grep "model name" /proc/cpuinfo
model name	: AMD EPYC 7402P 24-Core Processor
root@server01:~# incus exec test2 -- grep "model name" /proc/cpuinfo
model name	: AMD EPYC 7402P 24-Core Processor
root@server01:~# incus exec test3 -- grep "model name" /proc/cpuinfo
model name	: AMD EPYC 7402P 24-Core Processor
root@server01:~# incus exec test4 -- grep "model name" /proc/cpuinfo
model name	: AMD EPYC 7402P 24-Core Processor
root@server01:~# 

Thanks for the response and example. I guess my specific question is whether the following is expected behavior for a cluster group with the default configuration:

david@foobar ~ $ incus cluster group show homecluster:default
description: Default cluster group
members:
- incus1
- incus2
- incus3
config: {}
name: default
david@foobar ~ $ for x in {1..3}; do incus launch images:debian/12/cloud homecluster:test${x} --vm --target incus${x} -c migration.stateful=true; done
Launching test1
Launching test2
Launching test3
david@foobar ~ $ for x in {1..3}; do incus exec homecluster:test${x} -- egrep "model name|flags" /proc/cpuinfo; done
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor cpuid_fault pti
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize
vmx flags       : tsc_offset vtpr
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor cpuid_fault pti
david@foobar ~ $ for x in incus{1..3}; do ssh ${x} 'ps awfux | grep qemu | grep test | grep -v grep'; done
incus      13141 27.1  1.2 2498996 1216612 ?     Sl   10:58   0:30 /opt/incus/bin/qemu-system-x86_64 -S -name test1 -uuid c3fef556-cae8-484e-a364-39e6e5d07c63 -daemonize -cpu kvm64 -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test1/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test1/qemu.spice -pidfile /run/incus/test1/qemu.pid -D /var/log/incus/test1/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus
incus     185870 27.7  1.2 2476200 1198212 ?     Sl   10:59   0:29 /opt/incus/bin/qemu-system-x86_64 -S -name test2 -uuid 51c4daba-96de-4b6a-9d23-6c70515a2e31 -daemonize -cpu kvm64,popcnt,lahf_lm,clwb,clflushopt,rdrand,rdpid,sse4_1,ssbd,erms,tm,pdcm,rdseed,vmx,vnmi,pbe,stibp,pclmulqdq,movdiri,smap,pku,umip,vpclmulqdq,xsaveopt,pdpe1gb,ssse3,bmi2,invpcid,smx,ibrs,arat,fsrm,monitor,waitpkg,3dnowprefetch,avx2,dtes64,ss,xsaves,acpi,rdtscp,sse4_2,ibpb,xsave,vaes,ds_cpl,est,xtpr,aes,xgetbv1,serialize,movbe,abm,xsavec,gfni,f16c,adx,pcid,ht,tm2,avx,tsc_adjust,smep,fma,bmi1,fsgsbase,movdir64b -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test2/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test2/qemu.spice -pidfile /run/incus/test2/qemu.pid -D /var/log/incus/test2/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus
incus     177819 29.9  1.2 2480956 1199532 ?     Sl   10:59   0:29 /opt/incus/bin/qemu-system-x86_64 -S -name test3 -uuid 93b30d23-15d4-4a3f-9ae2-4a97f0d7c489 -daemonize -cpu kvm64 -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test3/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test3/qemu.spice -pidfile /run/incus/test3/qemu.pid -D /var/log/incus/test3/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus

Only instance test2 launched with the additional CPU flags set; the other two launched with plain kvm64. The host test2 landed on is the current database-leader; I’m not sure whether that has any bearing on how the flags are decided:

david@foobar ~ $ incus cluster list homecluster:
+--------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
|  NAME  |            URL            |      ROLES      | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATUS |      MESSAGE      |
+--------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| incus1 | https://192.168.10.7:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+--------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| incus2 | https://192.168.10.8:8443 | database-leader | x86_64       | default        |             | ONLINE | Fully operational |
|        |                           | database        |              |                |             |        |                   |
+--------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| incus3 | https://192.168.10.9:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+--------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+

Interestingly, I’m still unable to reproduce this here.

In my case, all 4 VMs got identical CPU flags with the QEMU invocation looking like:

incus       5386 54.9  0.6 1680636 407048 ?      Sl   02:54   0:21 /opt/incus/bin/qemu-system-x86_64 -S -name test1 -uuid a3f92a99-21c1-4397-a794-f349714afac6 -daemonize -cpu kvm64,fma,3dnowprefetch,fsgsbase,popcnt,f16c,adx,xsavec,clzero,wbnoinvd,xsave,avx,rdrand,sse4a,arat,rdtscp,ht,misalignsse,xgetbv1,fxsr_opt,ssse3,bmi1,rdseed,nrip_save,vmcb_clean,avx2,pfthreshold,stibp,tsc_adjust,vgif,ibpb,smap,movbe,aes,perfctr_core,smep,clwb,flushbyasid,umip,sse4_2,ssbd,clflushopt,xsaveopt,rdpid,cmp_legacy,svm,ibrs,bmi2,osvw,npt,lahf_lm,lbrv,pclmulqdq,mmxext,pdpe1gb,sse4_1,abm,xsaveerptr,tsc_scale -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test1/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test1/qemu.spice -pidfile /run/incus/test1/qemu.pid -D /var/log/incus/test1/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus

Might be worth checking that all your servers have every cluster member listed in /var/cache/incus/resources/ and that those .yaml files do in fact include the correct CPU flags?
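
For context, each member keeps a cached copy of the other members’ resource information in that directory and is meant to refresh it periodically. A minimal sketch of that kind of mtime-based cache check (hypothetical names, not the actual Incus code):

import (
	"os"
	"time"
)

// cachedResources returns the cached file contents only if the file
// was written within the last maxAge; otherwise the caller should
// re-query the member and rewrite the cache file.
func cachedResources(path string, maxAge time.Duration) ([]byte, bool) {
	fi, err := os.Stat(path)
	if err != nil || time.Since(fi.ModTime()) >= maxAge {
		return nil, false // missing or stale
	}
	data, err := os.ReadFile(path)
	return data, err == nil
}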

It looks like those cache files haven’t been updated since around July:

david@foobar ~ $ for host_name in incus{1..3}; do echo -e "\nhost ${host_name}:"; ssh ${host_name} 'sudo ls -lah /var/cache/incus/resources/'; done

host incus1:
total 76K
drwx------ 2 root root 4.0K Jul 12 01:18 .
drwx------ 4 root root 4.0K Jul 12 01:14 ..
-rw------- 1 root root  33K Jul 12 01:18 incus1.yaml
-rw------- 1 root root  16K Jul 12 01:16 incus2.yaml
-rw------- 1 root root  16K Jul 12 01:14 incus3.yaml

host incus2:
total 80K
drwx------ 2 root root 4.0K Jul 12 01:18 .
drwx------ 4 root root 4.0K Jul 12 01:18 ..
-rw------- 1 root root  33K Jul 12 01:18 incus1.yaml
-rw------- 1 root root  33K Jul 12 01:18 incus3.yaml

host incus3:
total 96K
drwx------ 2 root root 4.0K Jul 12 01:18 .
drwx------ 4 root root 4.0K Jul 12 01:15 ..
-rw------- 1 root root  33K Jul 12 01:18 incus1.yaml
-rw------- 1 root root  16K Jul 12 01:15 incus2.yaml
-rw------- 1 root root  33K Jul 12 01:18 incus3.yaml

And the contents seemed all over the place (side note: these files end in .yaml, but the content is actually JSON):

david@foobar ~ $ for host_name in incus{1..3}; do echo -e "\nhost ${host_name}:"; echo -n "  cpu flags count from API: "; ssh ${host_name} "sudo incus query /1.0/resources | jq .cpu.sockets[0].cores[0].flags | wc -l"; echo "  cached cpu flags count per member:"; for file_name in incus{1..3}.yaml; do echo -n "    ${file_name}: "; ssh ${host_name} "sudo cat /var/cache/incus/resources/${file_name} | jq .cpu.sockets[0].cores[0].flags | wc -l"; done; done

host incus1:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: 141
    incus2.yaml: 1
    incus3.yaml: 1

host incus2:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: 141
    incus2.yaml: cat: /var/cache/incus/resources/incus2.yaml: No such file or directory
0
    incus3.yaml: 141

host incus3:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: 141
    incus2.yaml: 1
    incus3.yaml: 141

I went through and removed the existing cache files and restarted the incus daemon on each of the nodes (I’m not sure the restart was actually necessary), and that seems to have regenerated them:

david@foobar ~ $ for host_name in incus{1..3}; do echo -e "\nhost ${host_name}:"; echo -n "  cpu flags count from API: "; ssh ${host_name} "sudo incus query /1.0/resources | jq .cpu.sockets[0].cores[0].flags | wc -l"; echo "  cached cpu flags count per member:"; for file_name in incus{1..3}.yaml; do echo -n "    ${file_name}: "; ssh ${host_name} "sudo cat /var/cache/incus/resources/${file_name} | jq .cpu.sockets[0].cores[0].flags | wc -l"; done; done

host incus1:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: cat: /var/cache/incus/resources/incus1.yaml: No such file or directory
0
    incus2.yaml: 139
    incus3.yaml: 139

host incus2:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: 139
    incus2.yaml: cat: /var/cache/incus/resources/incus2.yaml: No such file or directory
0
    incus3.yaml: 139

host incus3:
  cpu flags count from API: 139
  cached cpu flags count per member:
    incus1.yaml: 139
    incus2.yaml: 139
    incus3.yaml: cat: /var/cache/incus/resources/incus3.yaml: No such file or directory
0

And that seemed to fix the problem:

david@foobar ~ $ incus cluster group show homecluster:default
description: Default cluster group
members:
- incus1
- incus2
- incus3
config: {}
name: default

david@foobar ~ $ for x in {1..3}; do incus launch images:debian/12/cloud homecluster:test${x} --vm --target incus${x} -c migration.stateful=true; done
Launching test1
Launching test2
Launching test3

david@foobar ~ $ for x in {1..3}; do incus exec homecluster:test${x} -- egrep "model name|flags" /proc/cpuinfo; done
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize
vmx flags       : tsc_offset vtpr
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize
vmx flags       : tsc_offset vtpr
model name      : Common KVM processor
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize
vmx flags       : tsc_offset vtpr

david@foobar ~ $ for x in incus{1..3}; do ssh ${x} 'ps awfux | grep qemu | grep test | grep -v grep'; done
incus     152230 22.6  0.9 2496480 975448 ?      Sl   23:19   0:20 /opt/incus/bin/qemu-system-x86_64 -S -name test1 -uuid 3c0d6f84-3202-4a35-9657-c9fc980d15c4 -daemonize -cpu kvm64,rdtscp,movdiri,ssbd,ibpb,bmi2,xsavec,est,stibp,umip,waitpkg,ibrs,adx,movdir64b,aes,3dnowprefetch,tm2,avx,fsrm,pdpe1gb,ssse3,xtpr,dtes64,monitor,vmx,xsave,rdrand,xgetbv1,pku,acpi,sse4_2,rdseed,ds_cpl,fma,tsc_adjust,avx2,ss,pbe,pdcm,rdpid,sse4_1,popcnt,tm,movbe,smap,xsaveopt,arat,vnmi,serialize,erms,gfni,vaes,ht,abm,smep,clflushopt,clwb,pclmulqdq,lahf_lm,fsgsbase,xsaves,smx,f16c,bmi1,invpcid,vpclmulqdq -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test1/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test1/qemu.spice -pidfile /run/incus/test1/qemu.pid -D /var/log/incus/test1/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus
incus     107883 22.4  1.0 2497664 987664 ?      Sl   23:19   0:18 /opt/incus/bin/qemu-system-x86_64 -S -name test2 -uuid 1145d949-7632-4679-8381-e1b78001195d -daemonize -cpu kvm64,pdcm,abm,f16c,avx2,vnmi,gfni,umip,clwb,ssbd,adx,fsgsbase,rdseed,xsave,smep,ds_cpl,tm2,bmi2,rdpid,ibrs,xsaveopt,fsrm,tsc_adjust,waitpkg,acpi,est,xgetbv1,movdir64b,xsavec,ss,pdpe1gb,serialize,monitor,ibpb,ht,tm,aes,ssse3,smap,xsaves,arat,movbe,rdrand,dtes64,clflushopt,vmx,xtpr,stibp,popcnt,movdiri,lahf_lm,pku,fma,vaes,pclmulqdq,smx,vpclmulqdq,pbe,3dnowprefetch,bmi1,erms,invpcid,rdtscp,sse4_1,sse4_2,avx -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test2/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test2/qemu.spice -pidfile /run/incus/test2/qemu.pid -D /var/log/incus/test2/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus
incus     106589 23.1  1.0 2508060 1008052 ?     Sl   23:19   0:17 /opt/incus/bin/qemu-system-x86_64 -S -name test3 -uuid a66c40a8-8e10-4fba-8680-0996052ff1d5 -daemonize -cpu kvm64,clwb,dtes64,ds_cpl,popcnt,abm,xsaves,ibrs,sse4_2,est,lahf_lm,3dnowprefetch,smx,fma,ht,fsgsbase,smap,movdiri,xsave,smep,adx,xsavec,aes,avx2,monitor,vnmi,ss,xtpr,movdir64b,bmi1,gfni,sse4_1,pdpe1gb,xsaveopt,rdpid,bmi2,ibpb,fsrm,acpi,f16c,waitpkg,xgetbv1,movbe,ssbd,rdtscp,tm2,stibp,clflushopt,umip,arat,vaes,serialize,pclmulqdq,ssse3,invpcid,vpclmulqdq,pku,rdseed,pdcm,tm,vmx,tsc_adjust,rdrand,erms,avx,pbe -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/test3/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/test3/qemu.spice -pidfile /run/incus/test3/qemu.pid -D /var/log/incus/test3/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus

Here’s a stat from one of the cache files before I removed them:

root@incus2:~# stat /var/cache/incus/resources/incus1.yaml 
  File: /var/cache/incus/resources/incus1.yaml
  Size: 33009           Blocks: 72         IO Block: 4096   regular file
Device: 252,1   Inode: 14552717    Links: 1
Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-10-20 10:51:38.133194515 -0500
Modify: 2024-07-12 01:18:52.150842038 -0500
Change: 2024-07-12 01:18:52.150842038 -0500
 Birth: 2024-07-12 01:18:44.858380284 -0500

I think the line in the Incus source that checks the age of the cache file should be something like:

			if err == nil && time.Now().Add(-time.Hour).Before(fi.ModTime()) {

if the intention is to refresh the cache every hour. The current condition looks like it evaluates to true regardless of how old the file is.
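
To illustrate the proposed condition with a standalone sketch (not the actual Incus code path): a file modified ten minutes ago passes, while a months-old file like the ones above fails:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Two hypothetical cache file mtimes: one recent, one roughly as
	// old as the July files above.
	for _, mtime := range []time.Time{
		time.Now().Add(-10 * time.Minute),
		time.Now().Add(-90 * 24 * time.Hour),
	} {
		fresh := time.Now().Add(-time.Hour).Before(mtime) // the proposed check
		fmt.Printf("modified %s ago -> fresh: %v\n",
			time.Since(mtime).Round(time.Minute), fresh)
	}
}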

Yeah, that looks like the issue. I’ll go with:

if err == nil && time.Now().Sub(fi.ModTime()) < time.Hour {
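
(Equivalent to time.Since(fi.ModTime()) < time.Hour, since time.Since(t) is defined as time.Now().Sub(t).)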

On the YAML front, I’m not sure why we’re writing JSON into a .yaml file, but it’s still valid since YAML is a superset of JSON. We may want to switch to an actual YAML writer and reader though 🙂
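
For what it’s worth, a quick sketch of that superset property, feeding JSON like the cached files contain through gopkg.in/yaml.v3 (a minimal example, not the Incus code):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	// JSON is valid YAML, so a YAML decoder reads the JSON-formatted
	// cache files unchanged.
	doc := []byte(`{"cpu": {"architecture": "x86_64", "total": 48}}`)

	var out map[string]any
	if err := yaml.Unmarshal(doc, &out); err != nil {
		panic(err)
	}
	fmt.Println(out)
}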