Ozymandias
(Ozymandias)
February 18, 2022, 2:22pm
1
I’ve a Python (pylxd) script which spins up a new instance, then on success deletes the previous instance and sets up new proxies to the services.
What I’m finding is that even though the previous instance is deleted, a forkproxy process is left active, and I can’t activate a new proxy on the new instance until I kill it. I work around the problem by removing the proxies from the original instance before deleting it.
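The workaround looks roughly like this (a minimal sketch assuming a pylxd `Client` named `lxd`; `strip_proxy_devices` is my own hypothetical helper, not a pylxd API):

```python
def strip_proxy_devices(devices: dict) -> dict:
    """Return a copy of an instance's devices dict with all proxy devices removed."""
    return {name: dev for name, dev in devices.items()
            if dev.get("type") != "proxy"}

# Hedged usage with pylxd (assumes `lxd = pylxd.Client()` and an existing instance):
# con = lxd.containers.get(name)
# con.devices = strip_proxy_devices(con.devices)
# con.save(wait=True)    # detach the proxies first so their forkproxy processes exit
# con.stop(wait=True)
# con.delete(wait=True)  # now the delete leaves no forkproxy behind
```

Removing the proxy devices before the delete gives LXD a chance to tear down each forkproxy cleanly.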
Now, is this something connected to using pylxd that doesn’t occur with the lxc toolset, or is it a known issue with LXD? Or perhaps a side effect of `lxc delete --force`?
tomp
(Thomas Parrott)
February 18, 2022, 3:09pm
2
Please can you log your reproducer steps here https://github.com/lxc/lxd/issues
Thanks
Ozymandias
(Ozymandias)
February 18, 2022, 4:19pm
3
I’ll have a go at making a simple reproducible case when I have the time…
1 Like
tomp
(Thomas Parrott)
February 21, 2022, 2:14pm
4
This looks related, was that your post?
opened 11:21AM - 21 Feb 22 UTC
# Required information
* Distribution: RHEL
* Distribution version: 8.3
```
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
addresses: []
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
MIICEzCCAZigAwIBAgIQdDV0HglDbbijzN60BSnCxDAKBggqhkjOPQQDAzA5MRww
GgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRkwFwYDVQQDDBByb290QGNhY2hl
LWRldjAxMB4XDTIxMTIxMzE0MjYyNFoXDTMxMTIxMTE0MjYyNFowOTEcMBoGA1UE
ChMTbGludXhjb250YWluZXJzLm9yZzEZMBcGA1UEAwwQcm9vdEBjYWNoZS1kZXYw
MTB2MBAGByqGSM49AgEGBSuBBAAiA2IABMa+4zytRlrW2D2YTAi40Ov+fkCmXNcy
luuGo1HVcT5pBHZjOnpwMjgf9NPRoJPHROk//Grt4F70FKO/DEzsGXf0T5i/4KZP
/gKvmpgQRPBid79r9SCV9DdcY4wmhQYIBKNlMGMwDgYDVR0PAQH/BAQDAgWgMBMG
A1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwLgYDVR0RBCcwJYILY2Fj
aGUtZGV2MDGHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDaQAw
ZgIxAJAyHg5GF2srIR8ml4LDYoS4GG8waq7jKPBFZc5+KyYoayXzQ3Qu9ZLFTB6x
TZYlTwIxAIWOu5i6aROJYn+jgKWw6G5le7e/F2OSqZ5RRzQxNFx8XG9tEKsES6n+
e/6PoC877A==
-----END CERTIFICATE-----
certificate_fingerprint: 49dcea0ec20bb22360fc1e396f0826b54d6cede16160ac9af1dab2bec67306b5
driver: lxc
driver_version: 4.0.12
firewall: xtables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
netnsid_getifaddrs: "true"
seccomp_listener: "false"
seccomp_listener_continue: "false"
shiftfs: "false"
uevent_injection: "true"
unpriv_fscaps: "true"
kernel_version: 4.18.0-240.10.1.el8_3.x86_64
lxc_features:
cgroup2: "true"
core_scheduling: "true"
devpts_fd: "true"
idmapped_mounts_v2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
pidfd: "true"
seccomp_allow_deny_syntax: "true"
seccomp_notify: "true"
seccomp_proxy_send_notify_fd: "true"
os_name: Red Hat Enterprise Linux
os_version: "8.3"
project: default
server: lxd
server_clustered: false
server_name: cache-dev01
server_pid: 9744
server_version: "4.23"
storage: dir
storage_version: "1"
storage_supported_drivers:
- name: btrfs
version: 5.4.1
remote: false
- name: cephfs
version: 15.2.14
remote: true
- name: dir
version: "1"
remote: false
- name: lvm
version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.42.0
remote: false
- name: ceph
version: 15.2.14
remote: true
```
# Issue description
After creating an LXD instance from the AlmaLinux cloud image, I created a proxy device from the host to the instance, mapping host:8080 to instance:80. This works with no problems to this point. However, when I force-delete the running instance, the forkproxy for the 8080->80 mapping still remains and prevents the creation of a new proxy on the same ports to a new container.
# Steps to reproduce
1. Create a cloud instance container and assign a TCP proxy port. I use a Python script and pylxd:
```python
# `lxd` is a pylxd Client instance; `id` is a single-digit string suffix
con = lxd.containers.get(name)
con.devices['fcgi808' + id] = {
    'connect': 'tcp:127.0.0.1:80',
    'listen': 'tcp:0.0.0.0:808' + id,
    'type': 'proxy',
}
con.save()
```
2. Confirm the proxy works (mine is bound to nginx on port 80) using curl or wget to localhost:8080 on the host.
3. `lxc delete --force <name of instance>`
4. Confirm that the instance has been deleted using `lxc list`.
5. Check with `ps` that the forkproxy process is still running, e.g.:
` 1000000 138075 9744 0 10:03 ? 00:00:00 /snap/lxd/current/bin/lxd forkproxy -- 9744 -1 tcp:0.0.0.0:8080 131212 -1 tcp:127.0.0.1:80 0644`
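The leftover process can also be spotted programmatically with a small filter over `ps -ef` output (a minimal sketch; `is_forkproxy_line` is my own hypothetical helper, and the sample line is the one from the `ps` output above):

```python
def is_forkproxy_line(ps_line: str, listen: str) -> bool:
    """Return True if a `ps -ef` line is an LXD forkproxy for the given listen address."""
    return "forkproxy" in ps_line and f"tcp:{listen}" in ps_line

# Sample line taken from the reproducer output above
sample = ("1000000 138075 9744 0 10:03 ? 00:00:00 "
          "/snap/lxd/current/bin/lxd forkproxy -- 9744 -1 "
          "tcp:0.0.0.0:8080 131212 -1 tcp:127.0.0.1:80 0644")

assert is_forkproxy_line(sample, "0.0.0.0:8080")
```

Feeding each line of `subprocess.check_output(["ps", "-ef"], text=True).splitlines()` through this check would flag any forkproxy that outlived its instance.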
Ozymandias
(Ozymandias)
February 21, 2022, 2:24pm
5
Yes, that’s mine… though I’ve discovered it does the same thing without --force…