I’m getting `Error: file does not exist` even though it does exist:
> incus file pull vm-1/etc/hosts .
Error: file does not exist
OS: Ubuntu 24.04
Client version: 6.9
Server version: 6.9
Thanks. I verified this and filed an issue on GitHub: `incus file pull` on VMs fails with "Error: file does not exist" · Issue #1619 · lxc/incus · GitHub
I’m getting partial (short) files after running `incus file pull` to get files from a container.
In a container:
root@docker:~/openresty-luajit-deb-docker# ls -l openresty-luajit-2.1.20250117-1hn1ubuntu2?.04.tar.gz
-rw-r--r-- 1 root root 3770316 Jan 31 00:41 openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz
-rw-r--r-- 1 root root 3794391 Jan 31 00:42 openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz
On a host:
$ incus file pull docker/root/openresty-luajit-deb-docker/openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz .
$ incus file pull docker/root/openresty-luajit-deb-docker/openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz .
$ ls -l openresty-luajit-2.1.20250117-1hn1ubuntu2?.04.tar.gz
-rw-r--r-- 1 hnakamur hnakamur 950272 Jan 31 09:52 openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz
-rw-r--r-- 1 hnakamur hnakamur 950272 Jan 31 09:52 openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz
The following error was logged in /var/log/incus/incusd.log.
time="2025-01-31T09:52:43+09:00" level=warning msg="Failed copying SFTP remote connection to instance connection" err="read unix /var/lib/incus/unix.socket->@: read: connection reset by peer" instance=docker local=/var/lib/incus/unix.socket project=default remote=@
I have found a workaround:
$ incus exec docker cat /root/openresty-luajit-deb-docker/openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz > openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz
$ incus exec docker cat /root/openresty-luajit-deb-docker/openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz > openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz
$ ls -l openresty-luajit-2.1.20250117-1hn1ubuntu2?.04.tar.gz
-rw-rw-r-- 1 hnakamur hnakamur 3770316 Jan 31 09:57 openresty-luajit-2.1.20250117-1hn1ubuntu22.04.tar.gz
-rw-r--r-- 1 hnakamur hnakamur 3794391 Jan 31 09:57 openresty-luajit-2.1.20250117-1hn1ubuntu24.04.tar.gz
OS: Ubuntu 24.04
Client version: 6.9
Server version: 6.9
There is ongoing work to change `incus file` to use the SFTP API, which may affect this. If the file size (a big file to transfer) is also a parameter that leads to a corrupted file on the other side, then that is also an issue.
Can you try to find cases where `incus file` gives an error? Your additional information is that a very big file causes a failure. How big is too big to fail?
At the moment, `incus file pull` from a VM fails in all cases. `incus file pull` from a container fails if the file is too big (how big?).
We fixed a bug related to this yesterday; an updated package will be available later today.
It seems that 1:6.9-ubuntu24.04-202501311200 does not fix the issue.
I still get short files after running `incus file pull` for files larger than 128 KiB.
Here is a test script to reproduce the problem.
$ cat incus-file-pull-test.sh
#!/bin/bash
incus launch images:ubuntu/24.04 filepulltest
sleep 1
incus exec filepulltest -- dd if=/dev/random of=/tmp/random-128kb.dat bs=1 count=131072
incus exec filepulltest -- openssl dgst -sha256 /tmp/random-128kb.dat
incus exec filepulltest -- dd if=/dev/random of=/tmp/random-128k+1b.dat bs=1 count=131073
incus file pull filepulltest/tmp/random-128kb.dat .
incus file pull filepulltest/tmp/random-128k+1b.dat .
ls -l random-128k{,+1}b.dat
openssl dgst -sha256 random-128kb.dat
dpkg -l incus-base incus-client
Here is the output:
$ bash -x ./incus-file-pull-test.sh
+ incus launch images:ubuntu/24.04 filepulltest
Launching filepulltest
+ sleep 1
+ incus exec filepulltest -- dd if=/dev/random of=/tmp/random-128kb.dat bs=1 count=131072
131072+0 records in
131072+0 records out
131072 bytes (131 kB, 128 KiB) copied, 0.282223 s, 464 kB/s
+ incus exec filepulltest -- openssl dgst -sha256 /tmp/random-128kb.dat
SHA2-256(/tmp/random-128kb.dat)= 85592f974212766222e529b2e58de21825bf5f7212350655ceb65de7f63c13da
+ incus exec filepulltest -- dd if=/dev/random of=/tmp/random-128k+1b.dat bs=1 count=131073
131073+0 records in
131073+0 records out
131073 bytes (131 kB, 128 KiB) copied, 0.254818 s, 514 kB/s
+ incus file pull filepulltest/tmp/random-128kb.dat .
+ incus file pull filepulltest/tmp/random-128k+1b.dat .
+ ls -l random-128kb.dat random-128k+1b.dat
-rw-r--r-- 1 hnakamur hnakamur 32769 Feb 2 16:37 random-128k+1b.dat
-rw-r--r-- 1 hnakamur hnakamur 131072 Feb 2 16:37 random-128kb.dat
+ openssl dgst -sha256 random-128kb.dat
SHA2-256(random-128kb.dat)= 85592f974212766222e529b2e58de21825bf5f7212350655ceb65de7f63c13da
+ dpkg -l incus-base incus-client
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-==============================-============-============================================================
ii incus-base 1:6.9-ubuntu24.04-202501311200 amd64 Incus - Container and virtualization daemon (container-only)
ii incus-client 1:6.9-ubuntu24.04-202501311200 amd64 Incus - Command line client
Note that the file size of random-128k+1b.dat becomes 32769 bytes, which is about 1/4 of the original 131073 bytes (131073 / 4 ≈ 32768).
Thanks. We’ve seen a couple reports of that now, I’ll investigate shortly…
I would expect that using the Incus 6.8 client with a 6.9 server would work around the issue.
The fix has been cherry-picked into the Zabbly Incus 6.9 package.
The comment for the patch says ‘Read 1M at a time’, but the code reads
_, err = io.CopyN(writer, src, 1024*1024*1024)
which is 1 GB. What am I missing?
Ah yeah, that was definitely meant to have one less 1024.
As far as I can tell, a copy without a size hits some odd issue in Go SFTP.
Specifying a size and handling EOF fixes that. I meant to use 1 MB to avoid high temporary memory usage by forcing smaller chunks.
I’ll send a follow-up PR to fix the chunk size.
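For reference, here is a minimal sketch of that chunked copy (the copyInChunks helper and the plain io.Reader/io.Writer endpoints are illustrative, not the actual Incus code): loop over io.CopyN with a 1 MiB chunk size and treat io.EOF as a clean end of file.

package main

import (
	"io"
	"os"
	"strings"
)

// copyInChunks copies src to dst in bounded chunks.
// Using io.CopyN with a small chunk size (1 MiB rather than 1 GiB) keeps
// temporary memory usage low, and treating io.EOF as success avoids
// reporting an error when the source ends partway through a chunk.
func copyInChunks(dst io.Writer, src io.Reader) error {
	const chunkSize = 1024 * 1024 // 1 MiB per iteration
	for {
		if _, err := io.CopyN(dst, src, chunkSize); err != nil {
			if err == io.EOF {
				return nil // source exhausted; copy complete
			}
			return err
		}
	}
}

func main() {
	// Toy usage: copy an in-memory reader to stdout.
	src := strings.NewReader("example payload\n")
	if err := copyInChunks(os.Stdout, src); err != nil {
		panic(err)
	}
}

The key difference from the snippet quoted above is the chunk size constant: 1024*1024 rather than 1024*1024*1024.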