How to limit network ingress and egress on a running container

I was trying to limit a container's ingress as described in this external resource:

lxc config device set mycontainer1 eth0 limits.ingress 1Mbit
Error: Device from profile(s) cannot be modified for individual instance. Override device or modify profile instead

What's the proper way to rate-limit ingress and egress for my running container? This is related to the question here: Find network usage for a container

Change “set” to “override” in that command.

lxc config device override t1 eth0 limits.ingress 1Mbit
Error: No value found in "limits.ingress"

Still no luck?

What nic type is it?

eth0:
name: eth0
nictype: bridged
parent: br0
type: nic

Try

lxc config device override t1 eth0 limits.ingress=1Mbit

That command allows overriding multiple keys at once (one per argument), and therefore requires key=value syntax for each argument.
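
For example, it should be possible to set both limits in one override call, since each key=value pair is its own argument (a sketch reusing the instance name t1 from this thread):

lxc config device override t1 eth0 limits.ingress=1Mbit limits.egress=1Mbit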

This seemed to have an effect!

lxc config device override t1 eth0 limits.ingress=1Mbit
Device eth0 overridden for t1

But if I try to set the egress now:

lxc config device override t1 eth0 limits.egress=1Mbit
Error: The device already exists

Do I need to set it all at once?

You can set both via the initial override command. But after the device is overridden (copied from the profile into the instance) you then need to use the “set” command.
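
For instance, once eth0 has been copied into the instance, further keys go through "set" (a sketch reusing the instance name t1):

lxc config device set t1 eth0 limits.egress=1Mbit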


Right, thanks @tomp

If I edit the default profile with these limits then, according to the docs, it's applied automatically to all instances once changed.

Is this true, and/or do I need to restart the containers to get the rate limits to apply?

That's right. The bandwidth limits can be applied without resetting the NIC.
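
If you want to double-check on the LXD host, the limits are applied with tc on the instance's host-side interface, so something like the following should show the shaping rules (the veth name is a placeholder here, as LXD generates it):

# replace vethXXXXXXXX with the instance's host-side veth device
tc qdisc show dev vethXXXXXXXX
tc filter show dev vethXXXXXXXX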


But does a change to a profile (let's say the default) instantly/automatically get applied as well to all instances using it?

Yes


Thanks for all the help @tomp! This is what I ultimately did to limit networking capacity on LXD containers, as a reference for others. (Here is a link to the references on available limits for networking.)

  1. First, inspect the individual container. Note the “-e”, which is needed to see any settings derived from the profiles, as they are not shown otherwise:
lxc config show -e mycontainer
  2. If you need to modify the individual container, this is done in two steps (a quick verification follows after the list). In the example below, we limit the container to 1Mbit for both upload & download:
lxc config device override mycontainer eth0 limits.ingress=1Mbit
lxc config device set mycontainer eth0 limits.egress=1Mbit
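
To confirm the override and the limits are in place, you can list the instance's local devices:

lxc config device show mycontainer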

No container restart is needed; this comes into effect immediately. You can test the limit with the speedtest-cli tool.
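
For example, assuming speedtest-cli is installed inside the container:

lxc exec mycontainer -- speedtest-cli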

If you'd like to set this for all containers via the default profile, then:

Edit the profile:
lxc config profile edit default

Add limits.egress & limits.ingress to the device as below.

Note that the changes take effect immediately for all containers using the default profile, with no container restart needed.

config: {}
description: Default LXD profile
devices:
  eth0:
    limits.egress: 1Mbit
    limits.ingress: 1Mbit
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
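
Alternatively, instead of editing the YAML, the same keys can be set directly on the profile's device (a sketch; older LXD versions may require the "key value" form instead of "key=value"):

lxc profile device set default eth0 limits.ingress=1Mbit
lxc profile device set default eth0 limits.egress=1Mbit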

Excellent, glad it worked out.


This is very much needed and very helpful.

How would I enforce this also for “sub-projects”? I mean, is there a hierarchy to how these settings can be enforced, so as to protect the network from overload?

It doesn’t look like cumulative NIC limits are covered by project limits currently.

So this would need to be set on all projects? How does an admin then enforce it for multiple clients using the same LXD instance/cluster? I mean, it would be trivial to take down a complete environment just by consuming all network capacity from any one project.

You can’t at this time set per-project cumulative network limits for all instances within that project.
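
As a workaround, the per-NIC limits can be applied inside each project separately, e.g. on that project's default profile (a sketch; "clienta" is a hypothetical project name, and this assumes the project has its own profiles enabled):

lxc profile device set default eth0 limits.ingress=1Mbit --project clienta
lxc profile device set default eth0 limits.egress=1Mbit --project clienta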

I understand. This would be a challenging situation then for multiple users/projects.

Sounds like a feature request :slight_smile:
