VM from disk image

Thanks! I should have noticed that from your previous comment. Launching the image is now getting stuck. I'm running: lxc launch hassio2 ha --vm --debug

I also tried: lxc launch hassio2 ha --vm -v --debug -c security.secureboot=false

Both are stopping at:

DBUG[06-28|16:11:28] Got operation from LXD
DBUG[06-28|16:11:28]
        {
                "id": "a54785f5-fed6-4937-88ba-34c73906f06a",
                "class": "task",
                "description": "Creating instance",
                "created_at": "2021-06-28T16:11:28.096741107+01:00",
                "updated_at": "2021-06-28T16:11:28.096741107+01:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "instances": [
                                "/1.0/instances/ha"
                        ]
                },
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
        }
DBUG[06-28|16:11:28] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/a54785f5-fed6-4937-88ba-34c73906f06a etag=
DBUG[06-28|16:11:28] Got response struct from LXD
DBUG[06-28|16:11:28]
        {
                "id": "a54785f5-fed6-4937-88ba-34c73906f06a",
                "class": "task",
                "description": "Creating instance",
                "created_at": "2021-06-28T16:11:28.096741107+01:00",
                "updated_at": "2021-06-28T16:11:28.096741107+01:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "instances": [
                                "/1.0/instances/ha"
                        ]
                },
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
        }

Here is a full debug log:

root@iowetea:~# lxc launch hassio2 ha --vm -v --debug
DBUG[06-28|16:11:28] Connecting to a local LXD over a Unix socket
DBUG[06-28|16:11:28] Sending request to LXD                   method=GET url=http://unix.socket/1.0 etag=
DBUG[06-28|16:11:28] Got response struct from LXD
DBUG[06-28|16:11:28]
        {
                "config": {},
                "api_extensions": [
                        "storage_zfs_remove_snapshots",
                        "container_host_shutdown_timeout",
                        "container_stop_priority",
                        "container_syscall_filtering",
                        "auth_pki",
                        "container_last_used_at",
                        "etag",
                        "patch",
                        "usb_devices",
                        "https_allowed_credentials",
                        "image_compression_algorithm",
                        "directory_manipulation",
                        "container_cpu_time",
                        "storage_zfs_use_refquota",
                        "storage_lvm_mount_options",
                        "network",
                        "profile_usedby",
                        "container_push",
                        "container_exec_recording",
                        "certificate_update",
                        "container_exec_signal_handling",
                        "gpu_devices",
                        "container_image_properties",
                        "migration_progress",
                        "id_map",
                        "network_firewall_filtering",
                        "network_routes",
                        "storage",
                        "file_delete",
                        "file_append",
                        "network_dhcp_expiry",
                        "storage_lvm_vg_rename",
                        "storage_lvm_thinpool_rename",
                        "network_vlan",
                        "image_create_aliases",
                        "container_stateless_copy",
                        "container_only_migration",
                        "storage_zfs_clone_copy",
                        "unix_device_rename",
                        "storage_lvm_use_thinpool",
                        "storage_rsync_bwlimit",
                        "network_vxlan_interface",
                        "storage_btrfs_mount_options",
                        "entity_description",
                        "image_force_refresh",
                        "storage_lvm_lv_resizing",
                        "id_map_base",
                        "file_symlinks",
                        "container_push_target",
                        "network_vlan_physical",
                        "storage_images_delete",
                        "container_edit_metadata",
                        "container_snapshot_stateful_migration",
                        "storage_driver_ceph",
                        "storage_ceph_user_name",
                        "resource_limits",
                        "storage_volatile_initial_source",
                        "storage_ceph_force_osd_reuse",
                        "storage_block_filesystem_btrfs",
                        "resources",
                        "kernel_limits",
                        "storage_api_volume_rename",
                        "macaroon_authentication",
                        "network_sriov",
                        "console",
                        "restrict_devlxd",
                        "migration_pre_copy",
                        "infiniband",
                        "maas_network",
                        "devlxd_events",
                        "proxy",
                        "network_dhcp_gateway",
                        "file_get_symlink",
                        "network_leases",
                        "unix_device_hotplug",
                        "storage_api_local_volume_handling",
                        "operation_description",
                        "clustering",
                        "event_lifecycle",
                        "storage_api_remote_volume_handling",
                        "nvidia_runtime",
                        "container_mount_propagation",
                        "container_backup",
                        "devlxd_images",
                        "container_local_cross_pool_handling",
                        "proxy_unix",
                        "proxy_udp",
                        "clustering_join",
                        "proxy_tcp_udp_multi_port_handling",
                        "network_state",
                        "proxy_unix_dac_properties",
                        "container_protection_delete",
                        "unix_priv_drop",
                        "pprof_http",
                        "proxy_haproxy_protocol",
                        "network_hwaddr",
                        "proxy_nat",
                        "network_nat_order",
                        "container_full",
                        "candid_authentication",
                        "backup_compression",
                        "candid_config",
                        "nvidia_runtime_config",
                        "storage_api_volume_snapshots",
                        "storage_unmapped",
                        "projects",
                        "candid_config_key",
                        "network_vxlan_ttl",
                        "container_incremental_copy",
                        "usb_optional_vendorid",
                        "snapshot_scheduling",
                        "snapshot_schedule_aliases",
                        "container_copy_project",
                        "clustering_server_address",
                        "clustering_image_replication",
                        "container_protection_shift",
                        "snapshot_expiry",
                        "container_backup_override_pool",
                        "snapshot_expiry_creation",
                        "network_leases_location",
                        "resources_cpu_socket",
                        "resources_gpu",
                        "resources_numa",
                        "kernel_features",
                        "id_map_current",
                        "event_location",
                        "storage_api_remote_volume_snapshots",
                        "network_nat_address",
                        "container_nic_routes",
                        "rbac",
                        "cluster_internal_copy",
                        "seccomp_notify",
                        "lxc_features",
                        "container_nic_ipvlan",
                        "network_vlan_sriov",
                        "storage_cephfs",
                        "container_nic_ipfilter",
                        "resources_v2",
                        "container_exec_user_group_cwd",
                        "container_syscall_intercept",
                        "container_disk_shift",
                        "storage_shifted",
                        "resources_infiniband",
                        "daemon_storage",
                        "instances",
                        "image_types",
                        "resources_disk_sata",
                        "clustering_roles",
                        "images_expiry",
                        "resources_network_firmware",
                        "backup_compression_algorithm",
                        "ceph_data_pool_name",
                        "container_syscall_intercept_mount",
                        "compression_squashfs",
                        "container_raw_mount",
                        "container_nic_routed",
                        "container_syscall_intercept_mount_fuse",
                        "container_disk_ceph",
                        "virtual-machines",
                        "image_profiles",
                        "clustering_architecture",
                        "resources_disk_id",
                        "storage_lvm_stripes",
                        "vm_boot_priority",
                        "unix_hotplug_devices",
                        "api_filtering",
                        "instance_nic_network",
                        "clustering_sizing",
                        "firewall_driver",
                        "projects_limits",
                        "container_syscall_intercept_hugetlbfs",
                        "limits_hugepages",
                        "container_nic_routed_gateway",
                        "projects_restrictions",
                        "custom_volume_snapshot_expiry",
                        "volume_snapshot_scheduling",
                        "trust_ca_certificates",
                        "snapshot_disk_usage",
                        "clustering_edit_roles",
                        "container_nic_routed_host_address",
                        "container_nic_ipvlan_gateway",
                        "resources_usb_pci",
                        "resources_cpu_threads_numa",
                        "resources_cpu_core_die",
                        "api_os",
                        "container_nic_routed_host_table",
                        "container_nic_ipvlan_host_table",
                        "container_nic_ipvlan_mode",
                        "resources_system",
                        "images_push_relay",
                        "network_dns_search",
                        "container_nic_routed_limits",
                        "instance_nic_bridged_vlan",
                        "network_state_bond_bridge",
                        "usedby_consistency",
                        "custom_block_volumes",
                        "clustering_failure_domains",
                        "resources_gpu_mdev",
                        "console_vga_type",
                        "projects_limits_disk",
                        "network_type_macvlan",
                        "network_type_sriov",
                        "container_syscall_intercept_bpf_devices",
                        "network_type_ovn",
                        "projects_networks",
                        "projects_networks_restricted_uplinks",
                        "custom_volume_backup",
                        "backup_override_name",
                        "storage_rsync_compression",
                        "network_type_physical",
                        "network_ovn_external_subnets",
                        "network_ovn_nat",
                        "network_ovn_external_routes_remove",
                        "tpm_device_type",
                        "storage_zfs_clone_copy_rebase",
                        "gpu_mdev",
                        "resources_pci_iommu",
                        "resources_network_usb",
                        "resources_disk_address",
                        "network_physical_ovn_ingress_mode",
                        "network_ovn_dhcp",
                        "network_physical_routes_anycast",
                        "projects_limits_instances",
                        "network_state_vlan",
                        "instance_nic_bridged_port_isolation",
                        "instance_bulk_state_change",
                        "network_gvrp",
                        "instance_pool_move",
                        "gpu_sriov",
                        "pci_device_type",
                        "storage_volume_state",
                        "network_acl",
                        "migration_stateful",
                        "disk_state_quota",
                        "storage_ceph_features",
                        "projects_compression",
                        "projects_images_remote_cache_expiry",
                        "certificate_project",
                        "network_ovn_acl",
                        "projects_images_auto_update",
                        "projects_restricted_cluster_target",
                        "images_default_architecture",
                        "network_ovn_acl_defaults",
                        "gpu_mig",
                        "project_usage",
                        "network_bridge_acl",
                        "warnings",
                        "projects_restricted_backups_and_snapshots",
                        "clustering_join_token",
                        "clustering_description",
                        "server_trusted_proxy"
                ],
                "api_status": "stable",
                "api_version": "1.0",
                "auth": "trusted",
                "public": false,
                "auth_methods": [
                        "tls"
                ],
                "environment": {
                        "addresses": [],
                        "architectures": [
                                "x86_64",
                                "i686"
                        ],
                        "certificate": "-----BEGIN CERTIFICATE-----\nMIICBzCCAY2gAwIBAgIRAN6IvzLc/dSlfqkPAoRI3yEwCgYIKoZIzj0EAwMwNTEc\nMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBpb3dl\ndGVhMB4XDTIxMDYyMzIxMDM0N1oXDTMxMDYyMTIxMDM0N1owNTEcMBoGA1UEChMT\nbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBpb3dldGVhMHYwEAYH\nKoZIzj0CAQYFK4EEACIDYgAExmqCdTZMcGxwGJkVKWQnAbkRQB4OiZK3NgLKOj4f\n5vOwaEGc9U0m13TSkQHwyUqzopBupUOyYTwepV4TAJRtP9ay0GZtmLZpqyMnYvV6\nPgDLwEvOFHxMH2TjsBSkz4xio2EwXzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\nCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAqBgNVHREEIzAhggdpb3dldGVhhwR/\nAAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMAvrzRWa4Pte\nI+OcC7OaHYTy+SVUHzfAo0goqDdmPCpPUOwj3ZWRsLFX0s0t7TZZgQIxAPfPGXc9\nlenDI0LTZHDr29Ms/IzYIHy7tXebAn2ijNKFVkDd7mlq7PNNzbeEuzigPg==\n-----END CERTIFICATE-----\n",
                        "certificate_fingerprint": "442ed510006258ef1617b37eba85d497d29a8876bd2b16bb4dd4fc3e37f7b480",
                        "driver": "lxc | qemu",
                        "driver_version": "4.0.9 | 5.2.0",
                        "firewall": "xtables",
                        "kernel": "Linux",
                        "kernel_architecture": "x86_64",
                        "kernel_features": {
                                "netnsid_getifaddrs": "false",
                                "seccomp_listener": "false",
                                "seccomp_listener_continue": "false",
                                "shiftfs": "false",
                                "uevent_injection": "true",
                                "unpriv_fscaps": "true"
                        },
                        "kernel_version": "4.19.0-17-amd64",
                        "lxc_features": {
                                "cgroup2": "true",
                                "devpts_fd": "true",
                                "idmapped_mounts_v2": "false",
                                "mount_injection_file": "true",
                                "network_gateway_device_route": "true",
                                "network_ipvlan": "true",
                                "network_l2proxy": "true",
                                "network_phys_macvlan_mtu": "true",
                                "network_veth_router": "true",
                                "pidfd": "true",
                                "seccomp_allow_deny_syntax": "true",
                                "seccomp_notify": "true",
                                "seccomp_proxy_send_notify_fd": "true"
                        },
                        "os_name": "Debian GNU/Linux",
                        "os_version": "10",
                        "project": "default",
                        "server": "lxd",
                        "server_clustered": false,
                        "server_name": "iowetea",
                        "server_pid": 3415,
                        "server_version": "4.15",
                        "storage": "zfs",
                        "storage_version": "2.0.3-8~bpo10+1"
                }
        }
Creating ha
DBUG[06-28|16:11:28] Sending request to LXD                   method=GET url=http://unix.socket/1.0/images/aliases/hassio2 etag=
DBUG[06-28|16:11:28] Got response struct from LXD
DBUG[06-28|16:11:28]
        {
                "description": "",
                "target": "7f16d4ad57ff4e4cf9f35890e70bd2d79cc044dfa98471e598edc6adc4214105",
                "name": "hassio2",
                "type": "virtual-machine"
        }
DBUG[06-28|16:11:28] Sending request to LXD                   method=GET url=http://unix.socket/1.0/images/7f16d4ad57ff4e4cf9f35890e70bd2d79cc044dfa98471e598edc6adc4214105 etag=
DBUG[06-28|16:11:28] Got response struct from LXD
DBUG[06-28|16:11:28]
        {
                "auto_update": false,
                "properties": {
                        "description": "Home Assistant image",
                        "os": "Debian",
                        "release": "buster 10.10"
                },
                "public": false,
                "expires_at": "1970-01-01T01:00:00+01:00",
                "profiles": [
                        "default"
                ],
                "aliases": [
                        {
                                "name": "hassio2",
                                "description": ""
                        }
                ],
                "architecture": "x86_64",
                "cached": false,
                "filename": "haos_ova-6.1.qcow2",
                "fingerprint": "7f16d4ad57ff4e4cf9f35890e70bd2d79cc044dfa98471e598edc6adc4214105",
                "size": 795017443,
                "type": "virtual-machine",
                "created_at": "2021-06-28T14:50:56+01:00",
                "last_used_at": "2021-06-28T16:03:03.535701416+01:00",
                "uploaded_at": "2021-06-28T15:57:39.099003966+01:00"
        }
DBUG[06-28|16:11:28] Connected to the websocket: ws://unix.socket/1.0/events
DBUG[06-28|16:11:28] Sending request to LXD                   method=POST url=http://unix.socket/1.0/instances etag=
DBUG[06-28|16:11:28]
        {
                "architecture": "",
                "config": {},
                "devices": {},
                "ephemeral": false,
                "profiles": null,
                "stateful": false,
                "description": "",
                "name": "ha",
                "source": {
                        "type": "image",
                        "certificate": "",
                        "fingerprint": "7f16d4ad57ff4e4cf9f35890e70bd2d79cc044dfa98471e598edc6adc4214105"
                },
                "instance_type": "",
                "type": "virtual-machine"
        }
DBUG[06-28|16:11:28] Got operation from LXD
DBUG[06-28|16:11:28]
        {
                "id": "a54785f5-fed6-4937-88ba-34c73906f06a",
                "class": "task",
                "description": "Creating instance",
                "created_at": "2021-06-28T16:11:28.096741107+01:00",
                "updated_at": "2021-06-28T16:11:28.096741107+01:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "instances": [
                                "/1.0/instances/ha"
                        ]
                },
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
        }
DBUG[06-28|16:11:28] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/a54785f5-fed6-4937-88ba-34c73906f06a etag=
DBUG[06-28|16:11:28] Got response struct from LXD
DBUG[06-28|16:11:28]
        {
                "id": "a54785f5-fed6-4937-88ba-34c73906f06a",
                "class": "task",
                "description": "Creating instance",
                "created_at": "2021-06-28T16:11:28.096741107+01:00",
                "updated_at": "2021-06-28T16:11:28.096741107+01:00",
                "status": "Running",
                "status_code": 103,
                "resources": {
                        "instances": [
                                "/1.0/instances/ha"
                        ]
                },
                "metadata": null,
                "may_cancel": false,
                "err": "",
                "location": "none"
        }

Sorry for the misunderstanding above. I'm guessing that SSH hung up, or a lot was going on in the background without my knowledge and made my system very slow, but this worked for me to launch the VM:

lxc launch hassio ha20 --vm -c security.secureboot=false

Now I can access the VM shell using lxc console ha20

Trying to exec into the shell does not work, and I think this is expected in my current situation.
lxc exec ha20 bash
Error: Failed to connect to lxd-agent

What I cannot understand now is that I cannot obtain an IP with my current macvlan profile. It works for other normal (images:debian/10) containers and virtual machines.

The current macvlan profile:

root@iowetea:~# lxc profile show common
config:
  boot.autostart: "true"
  environment.TZ: Europe/Malta
description: Iowetea common profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp2s0f0
    type: nic
  iowetea-main:
    path: /mnt/iowetea/main/
    source: /root/iowetea/main/
    type: disk
  iowetea-repos:
    path: /mnt/iowetea/repos/
    source: /root/iowetea/repos/
    type: disk
  root:
    path: /
    pool: default
    type: disk
name: common
used_by:
- /1.0/instances/net
- /1.0/instances/ha20
- /1.0/instances/mytestmachine

What can I do to allow LAN access from this virtual machine instance?

The IP address is obtained through the agent or through the LXD managed DHCP server. When you use macvlan and don’t have the agent, LXD has no idea what your IP address is.
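As a workaround in that situation, the VM still answers ARP on the LAN even though LXD can't report its address, so you can look its MAC up in the host's neighbour table. A rough sketch (the instance name ha20 is taken from earlier in this thread; the sweep subnet is an example you'd adjust to your LAN):

```shell
# Ask LXD for the VM's MAC address (LXD always knows this,
# even without the agent):
MAC="$(lxc config get ha20 volatile.eth0.hwaddr)"

# Optionally populate the neighbour table first by sweeping the LAN,
# e.g.: for i in $(seq 1 254); do ping -c1 -W1 192.168.1.$i >/dev/null & done; wait

# Then match the MAC in the neighbour table to find the IP:
ip neigh | grep -i "$MAC"
```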

Thank you @stgraber for your assistance, even for a newbie like me!

No worries, we’re here to help!

I’m quite lost with setting up the agent in my VM. I'd appreciate any guidance. I tried mount -t 9p config /mnt but I get unknown filesystem type 9p.

Most likely your kernel in the VM doesn’t support 9p.

You can try mount -t virtiofs config /mnt instead, see if you maybe have kernel support for virtiofs?
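To see what the guest kernel actually supports before trying either mount, a quick check inside the VM (sketch; on a minimal image like Home Assistant OS the modules may simply not exist):

```shell
# Filesystems the running kernel already knows about:
grep -E '9p|virtiofs' /proc/filesystems

# If nothing is listed, try loading the modules (silently fails if
# they were never shipped with the image):
modprobe 9pnet_virtio 2>/dev/null
modprobe virtiofs 2>/dev/null
```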

Hi @stgraber, that didn’t work out for me either. It seems Home Assistant Operating System is very restricted; not even apt is present.

Anyhow, I managed to get Home Assistant running in a VM. I am using a bridged network connection with the host; the machine has network access and the Home Assistant service is running.

Last but not least, I have one last issue to complete my riddle. I am writing a script to set up my containers and services. It seems that using lxc console won't do it from my bash script, and I cannot run lxc exec since I'm missing the agent.

Do you have any tips on how to execute a couple of commands inside an LXD VM without the agent, from a bash script running on the host? (And maybe whether there is any possibility of getting the lxd-agent working in the VM at all!)

Hmm, short of having the agent, I suspect SSH is probably your best bet. Assuming that they at least have that :slight_smile:

You could in theory transfer all the bits needed for the agent, they should be at /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ha20/config/

You can transfer that into the VM and manually install it, but if you don’t have 9p and virtiofs, chances are you also don’t have vsock and without vsock, the agent cannot work.
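For completeness, the usual manual install from inside a VM looks roughly like this; it assumes the config drive can be mounted at all (which this thread suggests it cannot be here) and that the drive ships an install script, as recent LXD versions do:

```shell
# Inside the VM: mount the config drive (whichever filesystem works):
mount -t 9p config /mnt || mount -t virtiofs config /mnt

# Run the bundled install script, then reboot so the agent starts:
cd /mnt
./install.sh
reboot
```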

I don’t think I’m going to be so lucky this time :cry:

SSH is not available without any manual intervention.

It would be great if one could pass commands through the console, but I guess I don't know what I'm asking for here :slight_smile:

Yeah, the console is a weird beast; it's really as if you were attached to the system over a serial port. You can write stuff to it, but to do it perfectly correctly you need to parse what's printed back and deal with fun things like escape sequences, maximum line width, and so on. It's a lot harder than when you're dealing with a clean stdin/stdout/stderr.

Yep I see, thank you for your explanation.

For this specific situation, I managed to work around it by using curl and a Python websocket script in my automation script.

Unfortunately, I bumped into another brick wall: I need to pass through my USB device. I've googled around and didn't find as clear an answer as I wanted. Is this possible yet for VMs?

Yeah, you can use lxc config device add NAME usb1 usb vendorid=ABC productid=DEF and that device will be attached to the VM.
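Putting that together, a sketch of the full passthrough (the instance name ha20, the device name zigbee, and the IDs below are examples, not from this thread):

```shell
# Find the vendor/product IDs of the device on the host:
lsusb
# e.g. "Bus 001 Device 004: ID 10c4:ea60 Silicon Labs CP210x UART Bridge"
#                              ^^^^ ^^^^
#                          vendorid productid

# Attach the device to the VM:
lxc config device add ha20 zigbee usb vendorid=10c4 productid=ea60
```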

Thanks @stgraber! I couldn't lay my hands on my computer this weekend and was looking forward to trying it, and it worked!

Trying to run lxc image import /tmp/metadata.tar.gz /tmp/haos_rpi4-64-6.1.img --alias haos on a raspberry pi and I’m getting Error: Unsupported compression

Image: https://github.com/home-assistant/operating-system/releases/download/6.1/haos_rpi4-64-6.1.img.xz

Contents of metadata.yaml:

architecture: aarch64
creation_date: 1626114299
properties:
  description: Home Assistant image
  os: Home Assistant OS

uname -m:
aarch64

Operating System: Debian 10

Any idea why this could happen and how to fix it?

Oh, all I had to do was convert the raw image to qcow2:

qemu-img convert -f raw -O qcow2 <raw-image.img> <converted-image.qcow2>
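Roughly the full flow, assuming the metadata.yaml shown above sits in the current directory (filenames taken from this thread; the alias and instance name are examples):

```shell
# Decompress the release image and convert it from raw to qcow2:
xz -d haos_rpi4-64-6.1.img.xz
qemu-img convert -f raw -O qcow2 haos_rpi4-64-6.1.img haos_rpi4-64-6.1.qcow2

# Pack the metadata and import both parts as a VM image:
tar -czf metadata.tar.gz metadata.yaml
lxc image import metadata.tar.gz haos_rpi4-64-6.1.qcow2 --alias haos

# Launch, disabling secure boot since the bootloader isn't signed:
lxc launch haos ha --vm -c security.secureboot=false
```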

I wasn't so lucky. Doing an lxc console into the VM, I realize the VM did not start properly; the prompt takes me straight to Shell>

That’s either because you’re missing a security.secureboot=false with an image that doesn’t have a properly signed bootloader or because your VM image isn’t compatible with UEFI (uses old legacy BIOS).


I am also trying to get Home Assistant working in an LXD VM. I have been communicating on IRC, but it might get more visibility here for others now and in the future.

I can't seem to get the port proxy working, though it works fine on other containers. I have set the VM to a static IP, the one that was initially supplied to it.

I ran lxc config device add hass httpProxy proxy listen=tcp:10.194.232.54:4333 connect=tcp:10.194.232.54:8123 nat=true to proxy the virtual machine so I can access it from outside the host, but I am unable to connect. From inside the VM it says it's fine. I also tried listening on 127.0.0.1, lxdbr0's 10.194.232.1, and 0.0.0.0 (which obviously wouldn't work), but to no avail.

Here is the VM config:

$ lxc config show hass -e                                       
architecture: x86_64
config:         
  image.description: Home Assistant Image
  image.os: Debian
  image.release: bullseye 11.1
  security.secureboot: "false"
  volatile.base_image: 28b62ec10068140a97eee322922c568b33780785856e9c0c86b34f354db416dc
  volatile.eth0.host_name: tap70cafdcf
  volatile.eth0.hwaddr: 00:16:3e:36:9e:ff
  volatile.last_state.power: RUNNING
  volatile.uuid: aff69c89-5a0d-47a5-ba66-77d897e51d0c
  volatile.vsock_id: "9"
devices:
  eth0:
    ipv4.address: 10.194.232.54
    name: eth0
    network: lxdbr0
    type: nic
  httpProxy:
    connect: tcp:10.194.232.54:8123 
    listen: tcp:10.194.232.54:4333
    nat: "true"
    type: proxy
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

The ha command output while consoled into the VM:

$ ha network info
docker:                                                                                                                                       
  address: 172.30.32.0/23                                                                                                                     
  dns: 172.30.32.3
  gateway: 172.30.32.1
  interface: hassio
host_internet: true
interfaces:
- connected: true
  enabled: true
  interface: enp5s0
  ipv4:
    address:
    - 10.194.232.54/24
    gateway: 10.194.232.1
    method: auto
    nameservers:
    - 10.194.232.1
  ipv6:
    address:
    - fe80::f314:77d7:fc44:3fca/64
    gateway: null
    method: auto
    nameservers: []
  primary: true
  type: ethernet
  vlan: null
  wifi: null

Strangely, if I add my host's SSH public key to an authorized_keys in the VM, I am able to connect to it via ssh root@10.194.232.54 -p 22222, but I still can't connect via the mapped proxy I set up for it. I'm also not able to curl the website it should be publishing from inside the VM :man_shrugging:

# ha banner

| |  | |                          /\           (_)   | |            | |  
| |__| | ___  _ __ ___   ___     /  \   ___ ___ _ ___| |_ __ _ _ __ | |_ 
|  __  |/ _ \| '_ \ _ \ / _ \   / /\ \ / __/ __| / __| __/ _\ | '_ \| __|
| |  | | (_) | | | | | |  __/  / ____ \\__ \__ \ \__ \ || (_| | | | | |_ 
|_|  |_|\___/|_| |_| |_|\___| /_/    \_\___/___/_|___/\__\__,_|_| |_|\__|

Welcome to the Home Assistant command line.

System information
  IPv4 addresses for enp5s0: 10.194.232.54/24
  IPv6 addresses for enp5s0: fe80::f314:77d7:fc44:3fca/64

  OS Version:               Home Assistant OS 6.6
  Home Assistant Core:      2021.11.5

  Home Assistant URL:       http://homeassistant.local:8123
  Observer URL:             http://homeassistant.local:4357
# curl http://homeassistant.local:8123
curl: (6) Could not resolve host: homeassistant.local
# curl http://localhost:8123
# curl http://0.0.0.0:8123
# curl http://127.0.0.1:8123
# curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>