/run/udev/data fills up

Hi all,

On my CI servers the /run filesystem fills up (slowly, but it gets there). The culprit is /run/udev/data, which contains around 380k files. 378k of these files are of the form “+cgroup:files_cache(8503011:buildrod-kball-bts)”. All of these entries have in common that their names combine the name of a container I once started with an ever-changing number. It is not only files_cache; there are a couple of other cgroups as well, such as anon_vm, kmalloc-{16,32,...}, dentry or task_struct.
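
In case it helps, this is roughly how I tallied the entries (a quick sketch; it assumes all the records sit directly in /run/udev/data, and the “+cgroup:” prefix is taken from the listing above):

    # Count /run/udev/data entries by the part of the filename before the first
    # colon ("+cgroup", "b8", "c189", ...), largest group first
    ls /run/udev/data | cut -d: -f1 | sort | uniq -c | sort -rn | head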

A couple of questions:

  • Any idea what causes this problem and how to fix it? Searching the web suggested “rm -rf” and “udevadm info --cleanup-db”.
  • What exactly is the “udev database” that lives in /run/udev/data?

Some information about the system:

  • Ubuntu 18.04
  • Custom realtime kernel 4.9
  • lxd 3.0.3

The system creates a new container for every build, runs the build inside the container and then throws the container away.

Sounds like a udev bug to me.

The udev database is used to track all devices on the system, mixing data that came from the kernel with data pulled from the device directly, plus even more data fed in by various udev scripts.

It’s then used by a variety of applications to locate devices.
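
For reference, a few ways to look at that database (the device node below is just an example; the raw files in /run/udev/data are keyed by device type and major:minor or ifindex, e.g. b8:0 for block device 8:0):

    # Pretty-printed database record for one device
    udevadm info --query=all --name=/dev/sda
    # Dump every record udev currently knows about
    udevadm info --export-db | less
    # The raw on-disk record for block device 8:0 (typically /dev/sda)
    cat /run/udev/data/b8:0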

You definitely should not just wipe the database, as restarting udev cannot re-populate data that only shows up when a device first appears. This may cause a variety of devices to effectively disappear as far as other software on the system is concerned.

I’m really not sure why udev would be tracking cgroup files though, and more concerning, why it wouldn’t remove those from the database when the container disappears…

It may be worth asking udev upstream about what’s going on here.

As for cleanup, I seem to remember the udev database being effectively clear-text files. If you can figure out a filename pattern that matches only the useless, ever-growing data, then deleting just those files should be safe.
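
As a minimal sketch (assuming the stale records really are only those whose filenames start with “+cgroup:”, as in the examples quoted above), something like this should work; check what matches before deleting anything:

    # See what would be removed, and how many entries that is
    find /run/udev/data -maxdepth 1 -type f -name '+cgroup:*' | head
    find /run/udev/data -maxdepth 1 -type f -name '+cgroup:*' | wc -l
    # Then delete only those entries
    find /run/udev/data -maxdepth 1 -type f -name '+cgroup:*' -delete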

That seems like a udev bug. I’d report this issue upstream as @stgraber said.