Is there any way to revert profile changes?

I have a container with the default profile attached to it. It also has a separate volume attached, plus other devices added through the container's configuration settings.

But somehow, by mistake, I emptied the default profile. Right now, if I run lxc config show on the container, it still shows the devices and the storage volume added to it.

So I have 2 questions:

  1. If I reboot the container, will there be any issues? Will I lose the data?
  2. Is there any way to revert to the previous version of the default profile?

Kindly help.

Can you show:

  • lxc profile show default
  • lxc config show NAME
  • lxc config show NAME --expanded

That should clarify what’s part of the local config vs what’s coming from the profile.
Most profile changes are applied live to the instances, so in general things should be okay after a reboot; if something was going to break, it would have already.

Hi @stgraber

lxc profile show default

config: {}
description: ""
devices: {}
name: default
used_by:
- /1.0/instances/lxc-8bea6292

lxc config show lxc-8bea6292

architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Ubuntu 20.04 LTS server (20201210)
  image.os: ubuntu
  image.release: focal
  limits.cpu: "8"
  limits.memory: 65GB
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: f5bbc0577d62171ad59d6ac2b39d51c76bb54bfff1d7a403f287a6b56d00e5f7
  volatile.cloud-init.instance-id: 5315645a-eeb0-478e-a959-8c48b2067073
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.q1new.host_name: veth3273bc94
  volatile.q1new.hwaddr: 00:16:3e:c7:3b:76
  volatile.q1new.name: eth0
  volatile.uuid: 99e831b2-4374-4af7-a9bc-22755d714d25
devices:
  EXTRA_PORT-59695:
    connect: tcp:10.219.195.157:59695
    listen: tcp:0.0.0.0:59695
    type: proxy
  JLAB:
    connect: tcp:10.219.195.157:55016
    listen: tcp:0.0.0.0:55016
    type: proxy
  SSH:
    connect: tcp:10.219.195.157:22
    listen: tcp:0.0.0.0:50267
    type: proxy
  q1new:
    network: q1new
    type: nic
  root:
    path: /
    pool: vol-8bea6292
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

lxc config show lxc-8bea6292 --expanded

architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Ubuntu 20.04 LTS server (20201210)
  image.os: ubuntu
  image.release: focal
  limits.cpu: "8"
  limits.memory: 65GB
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: f5bbc0577d62171ad59d6ac2b39d51c76bb54bfff1d7a403f287a6b56d00e5f7
  volatile.cloud-init.instance-id: 5315645a-eeb0-478e-a959-8c48b2067073
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.q1new.host_name: veth24141854
  volatile.q1new.hwaddr: 00:16:3e:c7:3b:76
  volatile.q1new.name: eth0
  volatile.uuid: 99e831b2-4374-4af7-a9bc-22755d714d25
devices:
  EXTRA_PORT-59695:
    connect: tcp:10.219.195.157:59695
    listen: tcp:0.0.0.0:59695
    type: proxy
  JLAB:
    connect: tcp:10.219.195.157:55016
    listen: tcp:0.0.0.0:55016
    type: proxy
  SSH:
    connect: tcp:10.219.195.157:22
    listen: tcp:0.0.0.0:50267
    type: proxy
  gpu0:
    pci: "0000:19:00.0"
    type: gpu
  q1new:
    network: q1new
    type: nic
  root:
    path: /
    pool: vol-8bea6292
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

I just did a reboot and was able to restart the container. But there’s another problem now.

As you can see, I have forwarded the SSH port and a couple of additional ports for services that I access publicly. Now I am not able to reach them; it appears the network is not working as expected. Running ping google.com inside the container works fine, so connectivity is available from the inside, but I am not able to access SSH or any other service from outside.

Kindly help.

Your default profile no longer has a root disk device so any new instances launched using that profile won’t work. Nor does it have an eth0 NIC device.

Suggest fixing this via (assuming you want a default root disk and eth0 NIC):

lxc profile device add default root disk pool=<pool name> path=/
lxc profile device add default eth0 nic network=<network name>

Your existing instances are running because they have their own explicit root disk device that doesn’t come from the profile.
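
For example, filling in the pool and network names visible in your config output above (a sketch only; substitute different names if those aren't the defaults you want for new instances):

lxc profile device add default root disk pool=vol-8bea6292 path=/   # pool name taken from your root device
lxc profile device add default eth0 nic network=q1new               # network name taken from your NIC device
lxc profile show default                                            # verify the profile afterwards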

Please can you show lxc network show q1new and ip a and ip r on the host and inside the container.

Hi @tomp

lxc network show q1new

config:
  ipv4.address: 10.219.195.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:48ae:27fd:ffb7::1/64
  ipv6.nat: "true"
description: ""
name: q1new
type: bridge
used_by:
- /1.0/instances/lxc-8bea6292
managed: true
status: Created
locations:
- none

On Host:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:bb:4d:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.46/24 brd 192.168.20.255 scope global dynamic noprefixroute enp2s0
       valid_lft 82452sec preferred_lft 82452sec
    inet6 fe80::6d82:66e4:b910:a3c3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether d4:5d:64:bb:4d:6c brd ff:ff:ff:ff:ff:ff
4: default-qbnet: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:ef:b3:8b brd ff:ff:ff:ff:ff:ff
    inet 10.216.25.1/24 scope global default-qbnet
       valid_lft forever preferred_lft forever
5: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:d1:c8:c9 brd ff:ff:ff:ff:ff:ff
    inet 10.22.103.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:5a25:35d0:1771::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fed1:c8c9/64 scope link 
       valid_lft forever preferred_lft forever
6: net-26dd0dbc6e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:48:ee:29 brd ff:ff:ff:ff:ff:ff
    inet 10.153.7.1/24 scope global net-26dd0dbc6e
       valid_lft forever preferred_lft forever
    inet6 fd42:faae:4d4a:36c4::1/64 scope global 
       valid_lft forever preferred_lft forever
7: q1new: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:fc:c5:cd brd ff:ff:ff:ff:ff:ff
    inet 10.219.195.1/24 scope global q1new
       valid_lft forever preferred_lft forever
    inet6 fd42:48ae:27fd:ffb7::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fefc:c5cd/64 scope link 
       valid_lft forever preferred_lft forever
8: qBack: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:e1:d3:ca brd ff:ff:ff:ff:ff:ff
    inet 10.72.67.1/24 scope global qBack
       valid_lft forever preferred_lft forever
    inet6 fd42:d650:175d:d60c::1/64 scope global 
       valid_lft forever preferred_lft forever
9: qnet1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:ce:0c:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.142.211.1/24 scope global qnet1
       valid_lft forever preferred_lft forever
    inet6 fd42:9dec:70c5:d72c::1/64 scope global 
       valid_lft forever preferred_lft forever
10: qbnet26dd0dbc6e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:49:6b:44 brd ff:ff:ff:ff:ff:ff
    inet 10.146.73.1/24 scope global qbnet26dd0dbc6e
       valid_lft forever preferred_lft forever
    inet6 fd42:1a97:aa0a:1f0b::1/64 scope global 
       valid_lft forever preferred_lft forever
11: qbnet6081594975: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:0b:19:16 brd ff:ff:ff:ff:ff:ff
    inet 10.59.41.1/24 scope global qbnet6081594975
       valid_lft forever preferred_lft forever
    inet6 fd42:6f5f:1670:5068::1/64 scope global 
       valid_lft forever preferred_lft forever
12: qbnet7f6ffaa6bb: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:eb:a3:9c brd ff:ff:ff:ff:ff:ff
    inet 10.89.77.1/24 scope global qbnet7f6ffaa6bb
       valid_lft forever preferred_lft forever
    inet6 fd42:1aab:8146:c4e6::1/64 scope global 
       valid_lft forever preferred_lft forever
13: qbnet8df707a948: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:c5:95:af brd ff:ff:ff:ff:ff:ff
    inet 10.24.34.1/24 scope global qbnet8df707a948
       valid_lft forever preferred_lft forever
    inet6 fd42:3f8d:8378:4097::1/64 scope global 
       valid_lft forever preferred_lft forever
14: qbnet9fe8593a8a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:2f:fb:16 brd ff:ff:ff:ff:ff:ff
    inet 10.20.117.1/24 scope global qbnet9fe8593a8a
       valid_lft forever preferred_lft forever
    inet6 fd42:f4e3:53a6:50cf::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe2f:fb16/64 scope link 
       valid_lft forever preferred_lft forever
15: qbnetfde9264cf3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:79:86:c6 brd ff:ff:ff:ff:ff:ff
    inet 10.72.73.1/24 scope global qbnetfde9264cf3
       valid_lft forever preferred_lft forever
    inet6 fd42:d2df:7943:210b::1/64 scope global 
       valid_lft forever preferred_lft forever
16: qnetf: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:3b:6e:62 brd ff:ff:ff:ff:ff:ff
    inet 10.144.52.1/24 scope global qnetf
       valid_lft forever preferred_lft forever
    inet6 fd42:7df2:ff58:2352::1/64 scope global 
       valid_lft forever preferred_lft forever
17: qnettest: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:4c:18:19 brd ff:ff:ff:ff:ff:ff
    inet 10.40.170.1/24 scope global qnettest
       valid_lft forever preferred_lft forever
    inet6 fd42:28ef:f45b:980::1/64 scope global 
       valid_lft forever preferred_lft forever
22: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a1:4e:3d:dc brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a1ff:fe4e:3ddc/64 scope link 
       valid_lft forever preferred_lft forever
56: veth0b832f4e@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master qbnet9fe8593a8a state UP group default qlen 1000
    link/ether c2:41:69:7b:30:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
575: vetha96f704@if574: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 52:91:52:36:cb:55 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::5091:52ff:fe36:cb55/64 scope link 
       valid_lft forever preferred_lft forever
79: qnet2bd7f907b7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:1c:7b:90 brd ff:ff:ff:ff:ff:ff
    inet 10.229.82.1/24 scope global qnet2bd7f907b7
       valid_lft forever preferred_lft forever
611: veth24141854@if610: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master q1new state UP group default qlen 1000
    link/ether ae:0e:49:d8:dc:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 3

ip r
default via 192.168.20.1 dev enp2s0 proto dhcp metric 20100 
10.20.117.0/24 dev qbnet9fe8593a8a proto kernel scope link src 10.20.117.1 
10.22.103.0/24 dev lxdbr0 proto kernel scope link src 10.22.103.1 linkdown 
10.24.34.0/24 dev qbnet8df707a948 proto kernel scope link src 10.24.34.1 linkdown 
10.40.170.0/24 dev qnettest proto kernel scope link src 10.40.170.1 linkdown 
10.59.41.0/24 dev qbnet6081594975 proto kernel scope link src 10.59.41.1 linkdown 
10.72.67.0/24 dev qBack proto kernel scope link src 10.72.67.1 linkdown 
10.72.73.0/24 dev qbnetfde9264cf3 proto kernel scope link src 10.72.73.1 linkdown 
10.89.77.0/24 dev qbnet7f6ffaa6bb proto kernel scope link src 10.89.77.1 linkdown 
10.142.211.0/24 dev qnet1 proto kernel scope link src 10.142.211.1 linkdown 
10.144.52.0/24 dev qnetf proto kernel scope link src 10.144.52.1 linkdown 
10.146.73.0/24 dev qbnet26dd0dbc6e proto kernel scope link src 10.146.73.1 linkdown 
10.153.7.0/24 dev net-26dd0dbc6e proto kernel scope link src 10.153.7.1 linkdown 
10.216.25.0/24 dev default-qbnet proto kernel scope link src 10.216.25.1 linkdown 
10.219.195.0/24 dev q1new proto kernel scope link src 10.219.195.1 
10.229.82.0/24 dev qnet2bd7f907b7 proto kernel scope link src 10.229.82.1 linkdown 
169.254.0.0/16 dev default-qbnet scope link metric 1000 linkdown 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.20.0/24 dev enp2s0 proto kernel scope link src 192.168.20.46 metric 100

Inside container:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: br-20f051f22af8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:cf:7b:5f:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-20f051f22af8
       valid_lft forever preferred_lft forever
3: br-218278bef006: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e8:e8:04:0e brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-218278bef006
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::1/64 scope global tentative 
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative 
       valid_lft forever preferred_lft forever
4: br-c15389d485d5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:bb:65:59:84 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-c15389d485d5
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bbff:fe65:5984/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e6:be:a8:48 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: vethcba483c@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c15389d485d5 state UP group default 
    link/ether 9a:0f:09:89:a3:4f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::980f:9ff:fe89:a34f/64 scope link 
       valid_lft forever preferred_lft forever
610: eth0@if611: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:c7:3b:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.219.195.36/24 brd 10.219.195.255 scope global dynamic eth0
       valid_lft 3163sec preferred_lft 3163sec
    inet6 fd42:48ae:27fd:ffb7:216:3eff:fec7:3b76/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3208sec preferred_lft 3208sec
    inet6 fe80::216:3eff:fec7:3b76/64 scope link 
       valid_lft forever preferred_lft forever

ip r
default via 10.219.195.1 dev eth0 proto dhcp src 10.219.195.36 metric 100 
10.219.195.0/24 dev eth0 proto kernel scope link src 10.219.195.36 
10.219.195.1 dev eth0 proto dhcp scope link src 10.219.195.36 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-c15389d485d5 proto kernel scope link src 172.18.0.1 
172.19.0.0/16 dev br-218278bef006 proto kernel scope link src 172.19.0.1 linkdown 
192.168.49.0/24 dev br-20f051f22af8 proto kernel scope link src 192.168.49.1 linkdown

When I try to ssh to the container, it says:

ssh_exchange_identification: Connection closed by remote host

But inside the container everything network-related, such as ping or wget, is working.

You have Docker on the host; have you checked that its firewall modifications aren’t affecting LXD’s networking?

See How to configure your firewall - LXD documentation and LXD and Docker Firewall Redux - How to deal with FORWARD policy set to drop - #3 by tomp
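
For reference, one common workaround discussed in those threads is to explicitly allow the LXD bridge's traffic through Docker's DOCKER-USER chain. A sketch only, using the q1new bridge and enp2s0 uplink from the outputs above; adjust the interface names to your setup:

sudo iptables -I DOCKER-USER -i q1new -o enp2s0 -j ACCEPT   # traffic from the LXD bridge out via the uplink
sudo iptables -I DOCKER-USER -o q1new -i enp2s0 -j ACCEPT   # and the inbound/return direction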

@tomp Since I already have an explicit root disk device for the container, should I add another disk device to the default profile? Wouldn’t that conflict with the container's existing root device?

I am running other containers with their own specific profiles, so I am not depending on the default profile for any new containers. Just this particular container is the concern right now. Kindly help; I am unable to access the container from outside.

Thanks @tomp. I don’t think the Docker firewall is the concern, as everything was working perfectly until yesterday, when I mistakenly emptied the default profile.

There’s no need to add anything to the default profile that you don’t need.
If you did add a root disk device, any existing one in the instance’s config would take precedence anyway and the profile’s one would be ignored.
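
If you want to confirm which definition wins, comparing the instance's local devices with the expanded (merged) view shows the override in effect (using your instance name from above):

lxc config device show lxc-8bea6292        # devices defined directly on the instance
lxc config show lxc-8bea6292 --expanded    # merged view: profile devices plus instance-level overrides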

Got it, thanks. The only issue left now is the networking…

I would rule that out first though, as firewall rules being applied in a different order can trigger issues even after things have been working fine for months.

Please show output of sudo iptables-save and sudo nft list ruleset.

sudo iptables-save
# Generated by iptables-save v1.6.1 on Fri Sep  2 09:54:03 2022
*raw
:PREROUTING ACCEPT [3156309215:15567823694937]
:OUTPUT ACCEPT [218011251:428434775341]
COMMIT
# Completed on Fri Sep  2 09:54:03 2022
# Generated by iptables-save v1.6.1 on Fri Sep  2 09:54:03 2022
*mangle
:PREROUTING ACCEPT [3156309215:15567823694937]
:INPUT ACCEPT [216360370:482305396064]
:FORWARD ACCEPT [2939948901:15085518328254]
:OUTPUT ACCEPT [218011251:428434775341]
:POSTROUTING ACCEPT [3158031412:15513958510674]
COMMIT
# Completed on Fri Sep  2 09:54:03 2022
# Generated by iptables-save v1.6.1 on Fri Sep  2 09:54:03 2022
*nat
:PREROUTING ACCEPT [194475:15020809]
:INPUT ACCEPT [66697:5802567]
:OUTPUT ACCEPT [53218:3638166]
:POSTROUTING ACCEPT [52166:3561691]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Fri Sep  2 09:54:03 2022
# Generated by iptables-save v1.6.1 on Fri Sep  2 09:54:03 2022
*filter
:INPUT ACCEPT [1430068:418255837]
:FORWARD ACCEPT [27067301:128374488047]
:OUTPUT ACCEPT [1721291:962377359]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Sep  2 09:54:03 2022

sudo nft list ruleset


sudo: nft: command not found

Also, could restarting the snap LXD service help with regaining networking on the container?
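
For reference, with the snap package the LXD daemon itself can typically be reloaded without restarting the running instances (a sketch, assuming the standard snap.lxd.daemon systemd unit):

sudo systemctl reload snap.lxd.daemon   # reloads LXD only; running instances keep running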

Found the problem. When the default profile got updated, the container's eth0 IP address also changed. But the container's config was still the old one, with the port forwarding for SSH and the other services pointing at the previous IP address. So I was not able to access it from outside even though the network was working inside.

Now I have changed the IP address in the container's port forwarding configuration to the latest eth0 IP address, and everything is working smoothly.
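
For anyone hitting the same issue, the change amounts to updating the connect address of each proxy device to the container's current eth0 address. A sketch using the device names and the 10.219.195.36 address from the outputs above (the key=value syntax may differ on older LXD versions):

lxc config device set lxc-8bea6292 SSH connect=tcp:10.219.195.36:22
lxc config device set lxc-8bea6292 JLAB connect=tcp:10.219.195.36:55016
lxc config device set lxc-8bea6292 EXTRA_PORT-59695 connect=tcp:10.219.195.36:59695

Pinning the container's address on the managed bridge (for example via ipv4.address on the q1new NIC device) should also keep the proxy targets from going stale if the DHCP lease changes again.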

Thanks @tomp and @stgraber for the advice above.
