
Cacher incompatible with sys-whonix (updates fail for whonix-ws and whonix-gw) #10

Open · tlaurion opened this issue Aug 19, 2022 · 28 comments

@tlaurion
Contributor

tlaurion commented Aug 19, 2022

EDIT:

  • A working but undesired implementation (a cacher proxy advertising itself as a guaranteed tor proxy even when that is false) is at: Cacher incompatible with sys-whonix (updates fail for whonix-ws and whonix-gw) #10 (comment)
  • The only way whonix template updates could be cached like all other templates (with repo definitions modified to comply with apt-cacher-ng requirements) is if a cacher-whonix version were created, deactivating whonix-gw's tinyproxy and replacing it with apt-cacher-ng, so that sys-whonix would be the update+cache proxy.

When cacher is activated, the whonix-gw and whonix-ws templates can no longer be updated, since both templates implement a check through the systemd service qubes-whonix-torified-updates-proxy-check at boot.

Also, cacher overrides the whonix salt recipes applied at Qubes installation, which are deployed when the user specifies that all updates should be downloaded through sys-whonix.

The standard place where Qubes defines and applies policies on which update proxies to use is /etc/qubes-rpc/policy/qubes.UpdatesProxy. Whonix still has its policies deployed at that 4.0 standard place, which on a standard install contains:

$type:TemplateVM $default allow,target=sys-whonix
$tag:whonix-updatevm $anyvm deny

while cacher writes its policy at the standard Q4.1 place:

shaker/cacher/use.sls

Lines 7 to 9 in 3f59aac

/etc/qubes/policy.d/30-user.policy:
  file.prepend:
    - text: "qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher"

First things first: I think cacher and whonix should agree on where UpdatesProxy settings should be prepended/modified, which I think historically (and per Qubes documentation as well) should be under /etc/qubes-rpc/policy/qubes.UpdatesProxy, for clarity and to avoid adding confusion.
Whonix policies need to be applied per the Q4.1 standard under Qubes. Not the subject of this issue.

@unman @adrelanos @fepitre


The following applies proper tor+cacher settings:

{% if grains['os_family']|lower == 'debian' %}
{% for repo in salt['file.find']('/etc/apt/sources.list.d/', name='*list') %}
{{ repo }}_baseurl:
  file.replace:
    - name: {{ repo }}
    - pattern: 'https://'
    - repl: 'http://HTTPS///'
    - flags: [ 'IGNORECASE', 'MULTILINE' ]
    - backup: False
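
For illustration, here is the effect of that rewrite on a typical repository line (hypothetical file contents; only the scheme prefix changes, so apt itself speaks plain HTTP to apt-cacher-ng, which then reaches the real repository over HTTPS):

before: deb tor+https://deb.whonix.org bullseye main contrib non-free
after:  deb tor+http://HTTPS///deb.whonix.org bullseye main contrib non-free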

Unfortunately, whonix templates implement a sys-whonix usage check which prevents the templates from using cacher.
This is documented at https://www.whonix.org/wiki/Qubes/UpdatesProxy, and is the result of the qubes-whonix-torified-updates-proxy-check systemd service started at boot.

Source code of the script can be found at https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check
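
The gist of that check, as a minimal sketch (the linked script is authoritative; the port and the marker string below come from this thread, the rest is simplified):

#!/bin/bash
# Simplified sketch, not the real Whonix script: query the UpdatesProxy that
# Qubes forwards to 127.0.0.1:8082 inside the template, and look for the
# "tor proxy" marker that Whonix expects before allowing torified updates.
if curl --silent --max-time 10 http://127.0.0.1:8082/ \
    | grep -q '<meta name="application-name" content="tor proxy"/>'; then
    mkdir -p /run/updatesproxycheck
    touch /run/updatesproxycheck/whonix-secure-proxy-check-done  # success flag for this boot
else
    echo "No torified updates proxy detected; failing closed." >&2
fi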

Hacking around the current internals of both projects, one can temporarily disable cacher so that the torified-updates-proxy-check succeeds and writes its success flag, which persists for the life of that booted TemplateVM. We can then reactivate cacher's UpdatesProxy override, restart qubesd, and validate that cacher is able to deal with tor+http -> cacher -> tor+https on Whonix TemplateVMs:

1- deactivate cacher's override of the qubes.UpdatesProxy policy:

[user@dom0 ~]$ cat /etc/qubes/policy.d/30-user.policy 
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

2- restart qubesd

[user@dom0 ~]$ sudo systemctl restart qubesd
[user@dom0 ~]$

3- Manually restart whonix template's torified-updates-proxy-check (here whonix-gw-16)
user@host:~$ sudo systemctl restart qubes-whonix-torified-updates-proxy-check
We see that whonix applied its state at:
https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check#L46

user@host:~$ ls /run/updatesproxycheck/whonix-secure-proxy-check-done 
/run/updatesproxycheck/whonix-secure-proxy-check-done

4- Manually restore the cacher override and restart qubesd

[user@dom0 ~]$ cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher
[user@dom0 ~]$ sudo systemctl restart qubesd

5- check functionality of downloading tor+https over cacher from whonix template:

user@host:~$ sudo apt update
Hit:1 http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye InRelease
Hit:2 tor+http://HTTPS///deb.debian.org/debian bullseye InRelease
Hit:3 tor+http://HTTPS///deb.debian.org/debian bullseye-updates InRelease
Hit:4 tor+http://HTTPS///deb.debian.org/debian-security bullseye-security InRelease
Hit:5 tor+http://HTTPS///deb.debian.org/debian bullseye-backports InRelease
Get:6 tor+http://HTTPS///fasttrack.debian.net/debian bullseye-fasttrack InRelease [12.9 kB]
Hit:7 tor+http://HTTPS///deb.whonix.org bullseye InRelease                                                                            
Fetched 12.9 kB in 7s (1,938 B/s)                                                                                                     
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done

The problem with this is that the Qubes update process starts templates and tries to apply updates unattended, and the workaround above obviously cannot be done unattended.

The question, then, is how to have whonix templates do a functional test to see that torified updates are possible, instead of whonix assuming it is the only one providing the service. The code seems to implement a curl check, but it doesn't work even when cacher is exposed as a tinyproxy replacement listening on 127.0.0.1:8082. Still digging, but in the end we either need to apply a mitigation (disable the Whonix check) or get proper functional testing from Whonix, which should check that the torified repositories are actually accessible.


How to fix this?

Some hints:
1- cacher and whonix should modify the policy in the same place, to ease troubleshooting and understanding of what is modified on the host system, even more so where dom0 is concerned. I think cacher should prepend to /etc/qubes-rpc/policy/qubes.UpdatesProxy
2- Whonix seems to have thought of a proxy check override:
https://github.com/Whonix/qubes-whonix/blob/685898472356930308268c1be59782fbbb7efbc3/etc/uwt.d/40_qubes.conf#L15-L21

@adrenalos: not sure this is the best option, and I haven't found where to trigger that override so that the check is bypassed?

3- At Qubes OS install, torified updates and torifying all network traffic (setting sys-whonix as the default gateway) are two different things, the latter not being enforced by default. Salt recipes are available to force updates through sys-whonix when selected at install, which dom0 still uses after cacher deployment:
[screenshot: Qubes OS installer options selection]

So my setup picked sys-whonix as the default gateway for cacher, since I configured my setup to use the sys-whonix proxy by default, which permits tor+http/HTTPS to go through after applying the manual mitigations. But that would not necessarily be the case for default deployments (would need to verify); sys-firewall is the default unless changed.

@unman: on that, I think the configure script should handle that corner case and make sure sys-whonix is the default gateway for cacher if whonix is deployed. Or perhaps your solution is meant to work independently of Whonix altogether (#6), but then there would be a discrepancy between the Qubes installation options, what most users use, and what is available out of the box after installing cacher from the rpm:

shaker/cacher.spec

Lines 50 to 60 in 3f59aac

%post
if [ $1 -eq 1 ]; then
echo "------------------------"
echo "cacher is being installed"
echo "------------------------"
qubesctl state.apply cacher.create
qubesctl --skip-dom0 --targets=template-cacher state.apply cacher.install
qubesctl --skip-dom0 --targets=cacher state.apply cacher.configure
qubesctl state.apply cacher.use
qubesctl --skip-dom0 --templates state.apply cacher.change_templates
fi

4- Also, cacher cannot be used as of now for dom0 updates either. Assigning dom0 updates to cacher gives the following error from cacher:
sh: /usr/lib/qubes/qubes-download-dom0-updates.sh: not found

So when using cacher + sys-whonix, sys-whonix would still be used by dom0 (where caching would not necessarily make sense, since dom0 doesn't share the same Fedora version as the templates, though I understand this is desired to change in the near future).

The point being: sys-whonix would still offer its tinyproxy service, sys-whonix would still be needed to torify whonix template updates, and dom0 would still depend on sys-firewall, not cacher, on a default install (without whonix being installed). Maybe a little more thought needs to be given to the long-term approach of pushing this amazing caching proxy forward, so as not to break things for willing testers :)

@fepitre: adding you in case you have additional recommendations; feel free to tag Marek later if you see fit. This caching proxy is a really long-awaited feature (QubesOS/qubes-issues#1957) which would be a life changer if salt recipes, cloning from debian-11-minimal and specializing templates for different use cases, and bringing a salt store are the desired outcome of all of this.

Thank you both. Looking forward to a cacher that can be deployed as "rpm as a service" on default installations.

We are close to that, but as of now this still doesn't work out of the box and seems to need a bit of collaboration from you both.

Thanks guys!

@tlaurion tlaurion changed the title Cacher incompatible with sys-whonix Cacher incompatible with sys-whonix (updates fail for whonix-ws and whonix-gw) Aug 19, 2022
@unman
Owner

unman commented Aug 19, 2022 via email

@tlaurion
Contributor Author

The standard place where the update proxy is defined is in /etc/qubes/policy.d/90-default.policy. I don't know why Whonix still uses the deprecated file - it could be easily changed in the updates-via-whonix state.

@unman : My bad... Seems like I have non-defaults which are artifacts of my own mess, playing on my machine with restored states :/
@adrelanos is doing the right thing, here are the relevant bits under /etc/qubes/policy.d/90-default.policy

# HTTP proxy for downloading updates
# Upgrade all TemplateVMs through sys-whonix.
#qubes.UpdatesProxy     *    @type:TemplateVM        @default    allow target=sys-whonix
# Upgrade Whonix TemplateVMs through sys-whonix.
qubes.UpdatesProxy      *   @tag:whonix-updatevm    @default    allow target=sys-whonix
# Deny Whonix TemplateVMs using UpdatesProxy of any other VM.
qubes.UpdatesProxy      *   @tag:whonix-updatevm    @anyvm      deny
# Default rule for all TemplateVMs - direct the connection to sys-net
qubes.UpdatesProxy      *   @type:TemplateVM        @default    allow target=sys-net
qubes.UpdatesProxy      *   @anyvm                  @anyvm      deny

Wiping /etc/qubes-rpc/policy/qubes.UpdatesProxy.
Should I understand that everything in /etc/qubes-rpc/policy/ is a legacy artifact?
I will have to restore 4.1 root from clean install and check differences between actual rootfs and intended one, thanks for the pointer.

I understand that everything needs to be under 30-user.policy:

[user@dom0 Documents]$ sudo cat /etc/qubes/policy.d/README 
# This directory contains qrexec policy in new (Qubes R5.0) format.
# See https://www.qubes-os.org/doc/ for explanation. If you want to
# write your own policy, see the '30-user' file.

So 30-user.policy, being applied before 90-default.policy, should have precedence; I will dig that out and reply with results.

@tlaurion
Contributor Author

@adrenalos:
Things are not consistent under Whonix and I am confused now.

user@host:~$ sudo apt update
WARNING: Execution of /usr/bin/apt prevented by /etc/uwt.d/40_qubes.conf because no torified Qubes updates proxy found.
Please make sure Whonix-Gateway (commonly called sys-whonix) is running.

- If you are using Qubes R3.2: The NetVM of this TemplateVM should be set to Whonix-Gateway (commonly called sys-whonix).

- If you are using Qubes R4 or higher: Check your _dom0_ /etc/qubes-rpc/policy/qubes.UpdatesProxy settings.

_At the very top_ of that file you should have the following:

$tag:whonix-updatevm $default allow,target=sys-whonix

To see if it is fixed, try running in Whonix TemplateVM:

sudo systemctl restart qubes-whonix-torified-updates-proxy-check

Then try to update / use apt-get again.

For more help on this subject see:
https://www.whonix.org/wiki/Qubes/UpdatesProxy

If this warning message is transient, it can be safely ignored.

So is it /etc/qubes/policy.d/90-default.policy or /etc/qubes-rpc/policy/qubes.UpdatesProxy?

@tlaurion
Contributor Author

tlaurion commented Aug 19, 2022

@unman @adrenalos:

I don't use Whonix and the people who have used this up to now don't
use it either.

In 4.1 the canonical place for policies is /etc/qubes/policy.d - I don't
have /etc/qubes-rpc/policy/qubes.UpdatesProxy on a "standard
install".
The standard place where the update proxy is defined is in
/etc/qubes/policy.d/90-default.policy.
I don't know why Whonix still uses the deprecated file - it could be
easily changed in the updates-via-whonix state.

Just checked dom0 snapshot against a clean 4.1.0 upgraded to 4.1.1 state restored from wyng-backup, passed through qvm-block to a dispvm:
Both /etc/qubes/policy.d/90-default.policy and /etc/qubes-rpc/policy/qubes.UpdatesProxy are there on a default Qubes install with Whonix installed.

@adrenalos: I confirm that https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/updates-via-whonix.sls is the one being deployed locally:

/srv/formulas/base/virtual-machines-formula/qvm/updates-via-whonix.sls
^C
[user@dom0 Documents]$ sudo cat /srv/formulas/base/virtual-machines-formula/qvm/updates-via-whonix.sls
# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

##
# qvm.updates-via-whonix
# ===============
#
# Upgrade all TemplateVMs through sys-whonix.
# Setup UpdatesProxy to always use sys-whonix for all TemplateVMs.
#
# Execute:
#   qubesctl state.sls qvm.updates-via-whonix dom0
##


default-update-policy-whonix:
  file.prepend:
    - name: /etc/qubes-rpc/policy/qubes.UpdatesProxy
    - text:
      - $type:TemplateVM $default allow,target=sys-whonix

Still i'm confused on the absence of results for

So I am not really understanding, as of now, where the 90-default.policy changes for whonix from QubesOS/qubes-issues#7586 (comment) are coming from.

@tlaurion
Contributor Author

And this is where trying to follow down the rabbit hole seems to have lost me for a bit:

[user@dom0 ~]$ sudo rpm -V qubes-core-dom0-4.1.28-1.fc32.noarch -v | grep 90-default.policy
.........  c /etc/qubes/policy.d/90-default.policy

Expecting to find that configuration file in the Qubes repos, a quick search leads nowhere...

Possible alternative solutions
1- Leave whonix alone:

  • Find and implement a differentiator in cacher.change_templates.sls so that the same find-and-replace statements are not applied to the whonix-gw and whonix-ws templates as to debian (not sure how), which is what happens today:
    {% if grains['os_family']|lower == 'debian' %}
    {% for repo in salt['file.find']('/etc/apt/sources.list.d/', name='*list') %}
    {{ repo }}_baseurl:
      file.replace:
        - name: {{ repo }}
        - pattern: 'https://'
        - repl: 'http://HTTPS///'
        - flags: [ 'IGNORECASE', 'MULTILINE' ]
        - backup: False
  • Remove the conflicting whonix file at /etc/qubes-rpc/policy/qubes.UpdatesProxy unconditionally
  • Have 30-user.policy look like the following:
    [user@dom0 Documents]$ sudo cat /etc/qubes/policy.d/30-user.policy
qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

This is what I have now. Not sure about other users' use cases, but I do not specialize whonix-gw or whonix-ws the way I specialize fedora and debian templates. This is unfortunate in the long run, since debian-12 will eventually land in the available repositories and would benefit from apt-cacher-ng as well.
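
After changing the policy, a quick sanity check from dom0 could look like this (sketch; it just restarts qubesd and lists every UpdatesProxy line currently deployed in the new-format directory):

[user@dom0 ~]$ sudo systemctl restart qubesd
[user@dom0 ~]$ sudo grep -H qubes.UpdatesProxy /etc/qubes/policy.d/*.policy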

2- Have the whonix torified check revised to comply with Qubes 4.1 and not compete with other update proxies
@adrenalos?


It looks as if the application of Whonix currently is incompatible with cacher
since both want to be the default UpdatesProxy.
One solution might be that if cacher detects that the Updates Proxy is
set to sys-whonix, it sets itself as UpdatesProxy with netvm of
sys-whonix regardless of what the default netvm is.
Similarly, the "update via whonix" state should set netvm for cacher if
that package is already installed, and cacher is set as the default Proxy.

If the Whonix qubes are not to have the benefit of caching then it would
be trivial to exclude them from the proxy. The question is about how to
treat the others.

Thoughts?

@tlaurion
Contributor Author

tlaurion commented Aug 19, 2022

@adrelanos @unman :

Finally understood what is happening

Ok, learned about a different part of whonix today while digging into the cause of the above issue (templates refusing to use the exposed proxy).

It seems that whonix templates shut down their local proxy access through 127.0.0.1 if the curl test run by systemctl start qubes-whonix-torified-updates-proxy-check at boot doesn't return:
<meta name="application-name" content="tor proxy"/>

tinyproxy does return this, and whonix enforces that check to know whether updates are to be downloaded through tor or not; whonix templates are preconfigured in Qubes to depend on sys-whonix, which is supposed to offer that service by default (this is part of /etc/qubes/policy.d/90-default.policy) at:
https://github.com/QubesOS/qubes-core-admin/blob/1e151335621d30ac55b6a666bc7c1419b22da241/qubes-rpc-policy/90-default.policy#L66

qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix
It fails closed for protection, since a tor-supporting tinyproxy or qubes-cache proxy is not found. Legit.

I think it would be advisable for cacher to report the same functionality support, so as not to break what seems to be expected.

@unman I think the whonix approach is actually right.
On cacher's side, it seems that a single small piece of html code is missing to mimic tinyproxy's behavior, so that whonix continues to do what it does and fails closed on proxy access in the template if the proxy check returns no tor support.


Work needed

@unman :
1- The following userinfo.html would need to be put under /usr/lib/apt-cacher-ng/userinfo.html in the cloned template-cacher so that the whonix template proxy check detects a tor-enabled local proxy (otherwise it fails):

<!DOCTYPE html>
<html lang="en">

<head>
<title>403 Filtered</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta name="application-name" content="tor proxy"/>
</head>

</html>

2- You should force removal of /etc/qubes-rpc/policy/qubes.UpdatesProxy when installing cacher.rpm, and add your own:
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix
It would not hurt. The first matching policy line is applied and later colliding lines are discarded, so if you remove your own lines at %postun, that should be safe:

qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher
qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix

3- Something isn't right here at cacher package uninstall:

shaker/cacher.spec

Lines 67 to 70 in 3f59aac

%postun
if [ $1 -eq 0 ]; then
sed -i /qubes.Gpg.*target=gpg/d /etc/qubes/policy.d/30-user.policy
fi

It doesn't look like you are removing the cacher update-proxy lines here; it looks more like a copy-paste from split-gpg:

shaker/gpg.spec

Lines 38 to 41 in 3f59aac

%preun
if [ $1 -eq 0 ]; then
sed -i /qubes.Gpg.*target=sys-gpg/d /etc/qubes/policy.d/30-user.policy
fi

Probably a bad copy-paste, but I think you would like to know before someone complains that uninstalling cacher makes split-gpg defunct.
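
For what it's worth, a corrected %postun might look something like this (sketch only; the exact sed patterns depend on which lines cacher's %post actually writes):

%postun
if [ $1 -eq 0 ]; then
    # remove only the UpdatesProxy lines that cacher itself added
    sed -i '/qubes.UpdatesProxy.*target=cacher/d' /etc/qubes/policy.d/30-user.policy
fi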

@adrenalos @marmarek:
1- /etc/qubes-rpc/policy/qubes.UpdatesProxy should not be used anymore and should be replaced by appending to /etc/qubes/policy.d/30-user.policy, see the comments above.
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix


That's it folks. The rest would work as is. And sorry for the white noise before. That was not so easy to understand :)

@adrelanos

First things first: I think cacher and whonix should agree on where UpdatesProxy settings should be prepended/modified, which I think historically (and per Qubes documentation as well) should be under /etc/qubes-rpc/policy/qubes.UpdatesProxy, for clarity and to avoid adding confusion.

@unman @adrelanos @fepitre

Prepending is mostly unspecific to Whonix and implemented by Qubes.

Qubes is currently moving away from the /etc/qubes-rpc/policy folder to the /etc/qubes/policy.d folder. The new folder will make things much simpler. While unspecific to Whonix, the instructions here are related and might be handy: https://www.whonix.org/wiki/Multiple_Qubes-Whonix_Templates, as these show file names, contents and even a comparison of the old versus the new Qubes qrexec folder/format.

I'd suggest asking Marek for guidance which file name in /etc/qubes/policy.d folder should be used here in this case.

The question, then, is how to have whonix templates do a functional test to see that torified updates are possible, instead of whonix assuming it is the only one providing the service. The code seems to implement a curl check, but it doesn't work even when cacher is exposed as a tinyproxy replacement listening on 127.0.0.1:8082. Still digging, but in the end we either need to apply a mitigation (disable the Whonix check) or get proper functional testing from Whonix, which should check that the torified repositories are actually accessible.

How to fix this?

Some hints: 1- cacher and whonix should modify the policy in the same place, to ease troubleshooting and understanding of what is modified on the host system, even more so where dom0 is concerned. I think cacher should prepend to /etc/qubes-rpc/policy/qubes.UpdatesProxy 2- Whonix seems to have thought of a proxy check override: https://github.com/Whonix/qubes-whonix/blob/685898472356930308268c1be59782fbbb7efbc3/etc/uwt.d/40_qubes.conf#L15-L21

@adrelanos: not sure this is the best option, and

I haven't found where to trigger that override so that the check is bypassed?

I don't like the bypass idea.

To use the override for testing, torified_check=true could be set in /etc/uwt.d/25_cacher.conf. Untested but I don't see why it wouldn't work.
related: config files numbering convention
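
For illustration, such a drop-in might look like this (untested, as noted above; torified_check is the variable referenced from 40_qubes.conf, and bypassing the check is only reasonable if the update path is otherwise guaranteed to be torified):

# /etc/uwt.d/25_cacher.conf  (testing only)
# Pretend the torified updates proxy check passed; see 40_qubes.conf for the default.
torified_check=true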

@adrelanos

I missed the other replies before making my prior reply and didn't read all posts there yet.
Pull request / ticket for Qubes porting Qubes-Whonix to the new style /etc/qubes/policy.d folder welcome. That seems almost a prerequisite before making changes here. Qubes qrexec and salt is for the very most part maintained by Qubes.

There's a typo in my nickname btw.

@tlaurion
Contributor Author

tlaurion commented Aug 20, 2022

@marmarek

To close this issue and have cacher (apt-cacher-ng) work out of the box on top of a default installation where sys-whonix has been selected to handle all updates (at install time or via qubesctl salt deployment): the current salt script for sys-whonix prepends a new line under /etc/qubes-rpc/policy/qubes.UpdatesProxy containing:
qubes.UpdatesProxy * @type:TemplateVM @default allow target=sys-whonix

cacher needs to override that when deployed, so that all Templates use cacher instead of sys-whonix, and it currently does so under /etc/qubes/policy.d/30-user.policy

90-default.policy already deploys the following, forcing whonix templates (only those, as opposed to all templates in the preceding line) to use sys-whonix for updates:
https://github.com/QubesOS/qubes-core-admin/blob/1e151335621d30ac55b6a666bc7c1419b22da241/qubes-rpc-policy/90-default.policy#L66
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix

90-default.policy also specifies in its header that the file must not be modified:
https://github.com/QubesOS/qubes-core-admin/blob/1e151335621d30ac55b6a666bc7c1419b22da241/qubes-rpc-policy/90-default.policy#L1-L2
and tells the user to modify lower-numbered policy files instead (the first file to set a policy wins; any later one trying to redefine the same policy will not be applied).

Considering that all policies have moved to the new directory, it seems that /etc/qubes-rpc/policy/qubes.UpdatesProxy is a forgotten artifact of a salt recipe that has not been modified since 2018 (last commit was from @adrelanos : https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/b75e6c21b87e552fbaee201fb12897732ed8fb45/qvm/updates-via-whonix.sls )

@marmarek @adrelanos : the question is where that line should be prepended per that salt recipe in a PR.
Should qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher be prepended under /etc/qubes/policy.d/30-user.policy on new installs? What to do with the artifact? When is that forgotten artifact applied relative to parsing /etc/qubes/policy.d/ ?
It seems that in any case the old artifact will need to be removed, but the correct flow for cleaning this up is over my head.
Can one of you open an issue upstream and point it here? I would propose a PR, but I could not propose anything better than something applied to new installs, where the old artifact would still be in the old place and most probably interfere with the newer policy directory and its order of application.

@unman @adrelanos: this is why I suggested for cacher to just wipe that file (do the right thing) when applying cacher (which replaces the policy so that all templates use cacher, after having cacher expose what whonix expects in its check with userinfo.html in previous post)

@adrelanos said:

related: config files numbering convention
@marmarek: any policy name convention that should be followed?


@adrelanos @unman :

To use the override for testing, torified_check=true could be set in /etc/uwt.d/25_cacher.conf. Untested but I don't see why it wouldn't work.

My implementation recommendation above (exposing in apt-cacher-ng's userinfo.html what the whonix proxy check expects) is another approach that would not invade whonix templates, and would not require anything other than cleaning up the old artifact above and understanding the Qubes config file numbering convention, if any.

Pull request / ticket for Qubes porting Qubes-Whonix to the new style /etc/qubes/policy.d folder welcome. That seems almost a prerequisite before making changes here. Qubes qrexec and salt is for the very most part maintained by Qubes.

@unman @adrelanos : as said earlier, if cacher applied the suggestions at #10 (comment), it would not be incompatible with upstream fixes in the salt recipes to be applied in dom0 for the next installation. The main risk down the road is a user reapplying the salt recipe to have sys-whonix download template updates without uninstalling cacher.rpm (which should do the right thing at uninstall). Otherwise, all templates will have repositories configured to go through apt-cacher-ng and will obviously fail. This is why you are all tagged here. This needs a bit of collaboration.

Thanks for helping this move forward!

@tlaurion
Contributor Author

I missed the other replies before making my prior reply and didn't read all posts there yet. Pull request / ticket for Qubes porting Qubes-Whonix to the new style /etc/qubes/policy.d folder welcome. That seems almost a prerequisite before making changes here. Qubes qrexec and salt is for the very most part maintained by Qubes.

There's a typo in my nickname btw.

@adrelanos : any reason why whonix templates are not actually testing a tor url, instead of relying on the proxy itself to serve the modified tinyproxy web page from the whonix-gw template stating that sys-whonix is providing tor? The templates should probably verify the reality, that is, that tor urls can be accessed (a subset of the sdwdate urls?)

Otherwise, this is a dead end, and cacher has already dropped whonix templates as of now, letting them use sys-whonix and not interfering in any way. This is a bit sad, since cacher is not caching all the packages downloaded by templates, now or in any foreseeable future, unless Whonix templates change their proxy test to something that tests whether tor is actually accessible, instead of binding whonix templates to sys-whonix.

In the current scenario, the ideal would be to have cacher as the update proxy and have cacher use sys-whonix as its netvm, while having whonix templates fail as per the current failsafe mechanisms if the templates discover that cacher is not using sys-whonix (or anything else torifying all network traffic through tor).
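
In dom0, that wiring is a one-liner (assuming the proxy qube is named cacher, as elsewhere in this thread):

[user@dom0 ~]$ qvm-prefs cacher netvm sys-whonix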

@adrelanos

@adrelanos : any reason why whonix templates are not actually testing a tor url, instead of relying on the proxy itself to serve the modified tinyproxy web page from the whonix-gw template stating that sys-whonix is providing tor? The templates should probably verify the reality, that is, that tor urls can be accessed (a subset of the sdwdate urls?)

You mean like check.torproject.org? Because it's bad to hit check.torproject.org over clearnet. Also, a system dependency on a remote server for basic functionality should be avoided.

@tlaurion
Contributor Author

I see.

So basically, there is no way to:

  • have cacher defined as the whonix update proxy, since the whonix template checks for the expected sys-whonix tinyproxy output when connecting to it; it expects its updates to go through sys-whonix and only sys-whonix. This is by design of the whonix proxy check.
  • Since there is no way of having any chaining other than appvm -> sys-whonix -> tor onion site, there is no way for a whonix template to check connectivity against a tor onion site from behind cacher. Therefore, whonix cannot dynamically check for tor onion sites if it is not directly behind sys-whonix.
  • whonix templates, if not directly behind sys-whonix, could do a clearnet check over https to see whether the public IP exposed to the website is an exit node. This is undesired.

Basically, the only foreseeable option to have whonix templates cached by cacher (like all other templates, which can rewrite their update URLs) would be for cacher to replace sys-whonix's tinyproxy.

@adrelanos @unman : any desire to co-create cacher-whonix adapted salt recipes and have them packaged the same way cacher is, so that all template updates are cached?

@tlaurion
Contributor Author

Modified op.

@adrelanos

If sys-whonix is behind cacher... I.e.

whonix-gw-16 (Template) -> cacher -> sys-whonix -> sys-firewall -> sys-net

or...

debian-11 (Template) -> cacher -> sys-whonix -> sys-firewall -> sys-net

...then cacher could check if it is behind sys-whonix and, if it is, similarly to Whonix's check, cacher could provide some feedback to the Template.

How would cacher know it's behind sys-whonix? Options: ask its tinyproxy or maybe better, use qrexec.

In any case, this breaks Tor stream isolation. But that's broken in Qubes-Whonix anyhow:
https://forums.whonix.org/t/improve-apt-get-stream-isolation-for-qubes-templates/3315

For stream isolation to be functional, APT (apt-transport-tor) would have to be able to talk to a Tor SocksPort directly. This isn't possible due to Qubes UpdatesProxy implementation. I don't think any solution that supports stream isolation is realistic.

@adrelanos @unman : any desire to co-create cacher-whonix adapted salt recipes and have them packaged the same way cacher is, so that all template updates are cached?

I don't think I can create it but seems like a cool feature. Contributions welcome.

@unman
Owner

unman commented Oct 11, 2022 via email

@tlaurion
Contributor Author

Small update:

My past userinfo.html file was/is a hack linked to my attempt to make whonix templates able to interact with cacher.
My past attempt was a dumb replacement of that file with only the lines whonix requires, and it broke cacher management through a browser (from an app qube using cacher as its netvm, or with firefox installed in cacher-template).

I recommend leaving cacher-template alone (not installing any other software) and modifying cacher-template through a dom0 call as the root user, since cacher-template doesn't have sudo (which is good):
qvm-run --user root --pass-io --no-gui template-cacher "xterm"

And then adding the following lines to /usr/lib/apt-cacher-ng/userinfo.html (patch format):

--- /usr/lib/apt-cacher-ng/userinfo.html	2021-05-30 04:39:15.000000000 -0400
+++ /root/userinfo.html	2022-10-11 12:58:03.052000000 -0400
@@ -71,5 +71,7 @@
          </tr>
       </table>
    </body>
+ <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
+ <meta name="application-name" content="tor proxy"/>
 </html>
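
For example, if the diff above is saved in dom0 as userinfo.patch (hypothetical file name), it could be applied without opening a shell in the template, assuming the patch utility is present there:

[user@dom0 ~]$ qvm-run --user root --pass-io --no-gui template-cacher \
    "patch /usr/lib/apt-cacher-ng/userinfo.html" < userinfo.patch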

Once again, this breaks whonix's contract that the proxy (normally tinyproxy) is always torified, and requires users to set sys-whonix as cacher's netvm for tor+http urls to function properly out of the box.

@adrelanos @unman: I'm currently lost on what upstream path will be taken to make cacher usable with whonix.
Reminder: currently cacher deploys itself so that whonix repository URLs are not modified, and makes sure that cacher doesn't interact with whonix templates, which will continue to use sys-whonix as the update proxy, not cacher.

@unman
Owner

unman commented Oct 12, 2022 via email

@adrelanos

Anything that turns Whonix, the all-Tor OS, into a "maybe sometimes not" Tor OS is considered to be an unacceptable solution.

@unman

I'll repeat that I don't believe that this is the right approach. Confirmation that the caching proxy is connected over Tor must not rest on a static configuration setting.

Agreed. Since that would be risky (could produce leaks).

@tlaurion

@adrelanos @unman: I'm currently lost on what upstream path will be taken to make cacher usable with whonix.

Given how that ticket is going, long standing without progress, I would speculate: none.

@tlaurion

Reminder: currently cacher deploys itself so that whonix repository URLs are not modified, and makes sure that cacher doesn't interact with whonix templates, which will continue to use sys-whonix as the update proxy, not cacher.

The status quo. That's imperfect but at least not insecure.

@unman
Owner

unman commented Oct 12, 2022 via email

@adrelanos

@unman

That can't be the right solution.

We don't want cacher to always return "tor proxy", since it's in the
user's hands whether it routes through Tor or not.
In fact returning that is completely unconnected to whether routing is
via Tor. (The case is different in Whonix where that file can be
included and Tor access is supposedly guaranteed.)

Agreed.

@unman

If we do this then the rewrite option should not be applied in Whonix
templates - is there any grain we can access specific to Whonix?

I don't understand this part "is there any grain we can access specific to Whonix?".

@unman

I suggest that cacher set:
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix
qubes.UpdatesProxy * @tag:whonix-updatevm @anyvm deny
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher

Good. Though small detail:

qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix
qubes.UpdatesProxy * @tag:whonix-updatevm @anyvm deny

This should be Qubes / Whonix default anyhow.

@unman

IF the current target for @type:TemplateVM is target=sys-whonix, then
the netvm for cacher can be set to sys-whonix. That way Whonix users get
caching for all templates EXCEPT the whonix ones.

Also OK with me.

However, @tlaurion won't be too happy about it. The goal of this ticket was to allow users to cache all updates, including the Whonix Templates. That would surely be a nice feature. But from the Whonix perspective, the updates would then need to be routed through Whonix-Gateway without risk of going over clearnet.

Not sure if I mentioned that earlier... For this level of functionality, cacher would have to implement a similar "am I torified" check to the one Whonix Templates use. But either unman didn't like this idea and/or nobody is volunteering to implement it. Therefore I don't think this functionality will happen.

Otherwise, back to the drawing board. Maybe the Whonix Template "am I torified" check would need to be implemented in a nicer way, because it would then look more appealing to re-implement it in cacher. Suggestions welcome.

Maybe some qrexec-based implementation. A Whonix Template would ask "Is the Net Qube I am connected to, directly or through multiple hops, torified?" If yes, permit updates. Otherwise, fail closed.

@adrelanos

@unman

Thanks Patrick. The only options are:

  1. incorporate cacher into sys-whonix - not wanted by Whonix dev

Indeed.

  1. Make "Tor confirmation" dependent on online Tor test - e.g onion
    test I outlined. Issue here would be more leaking of information that
    Tor is used.

Not wanted by Whonix dev.

  3. Status quo - Whonix qubes do not use caching.

Agreed.

To add:

  4. Re-design of Whonix Template "am I torified" check. Would need to be contributed. Potentially the following components would require modification: dom0, Whonix, cacher.

@marmarek

marmarek commented Oct 12, 2022

4. Re-design of Whonix Template "am I torified" check. Would need to be contributed. Potentially the following components would require modification: dom0, Whonix, cacher.

I can easily add such information into qubesdb, using https://github.com/QubesOS/qubes-core-admin-addon-whonix/blob/master/qubeswhonix/__init__.py. Specifically, some value saying "my netvm has anon-gateway tag". If you'd like me to do that, I have questions:

  • should it be about just direct netvm, or any netvm up the chain (for example vm->another-proxy-or-vpn->sys-whonix) ?
  • should it be just qubesdb entry (accessible with qubesdb-read tool), or maybe put it into qubes-service tree, so a flag file in /run/qubes-service will be automatically created? the latter is probably easier to handle, but could have weird interactions with the user manually setting/unsetting the thing using qvm-service tool

EDIT: the above is about the actual netvm, not how updates proxy is routed.
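
For illustration, from inside a template the two variants would be consumed roughly like this (sketch; the key and service names are hypothetical, only the tools are real):

# variant 1: plain qubesdb entry
qubesdb-read /netvm-is-anon-gateway 2>/dev/null
# variant 2: qubes-service tree, exposed automatically as a flag file
test -f /run/qubes-service/netvm-is-anon-gateway && echo "netvm chain is torified"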

@adrelanos

My mistake in my suggestion in my previous post. Primarily in this ticket it's about UpdatesProxy settings. (I shouldn't have said Net Qube since these are set to none by default for Qubes Templates.)

These are good questions. I am not sure yet. Here are some thoughts.


Example 1) Simple, ideal.

So let's say for example... Let's start with a simple and ideal example... whonix-ws-16 Template gets configured to use cacher as UpdatesProxy. Then cacher is configured to use sys-whonix. In summary:

  • simple: whonix-ws-16 -> cacher -> sys-whonix
  • same with more detail: whonix-ws-16 -> cacher (UpdatesProxy of whonix-ws-16) -> sys-whonix (Net Qube of cacher)

This one should work fine.


Example 2)

whonix-ws-16 -> cacher -> sys-vpn -> sys-whonix

In this example, connections to onionized repositories won't be possible. (Because the VPN IP is the last in the chain and won't support connections to onions.) But since this is generally not recommended and a minority use case, we can probably ignore it.


Example 3)

whonix-ws-16 -> cacher -> sys-whonix -> sys-vpn

This one should work fine too.


Thinking from a script inside of whonix-ws-16... What's useful to ask from inside the script...?

    1. What's my NetVM? None. -> Great.
    2. What's my NetVM? anon-gateway -> Ok. Show a warning since Qubes isn't supposed to be used like that, but permit networking.
    3. What's my NetVM? Something else. -> Show a warning since Qubes isn't supposed to be used like that, and fail closed (no networking).
    4. Am I connected to an UpdatesProxy with tag anon-gateway? Yes. -> Great. Permit full networking.
    5. Am I connected to an UpdatesProxy with tag anon-gateway? No. -> Hm. Now what?
      5.1. But am I at least connected to an UpdatesProxy that itself has a Net Qube with tag anon-gateway somewhere in its chain? Yes. -> Great. Permit full networking.
  • should it be about just direct netvm, or any netvm up the chain (for example vm->another-proxy-or-vpn->sys-whonix) ?

Now that I was thinking more about it, it's about both. UpdatesProxy and Net Qube.

Anywhere in the chain or in a specific position, I am not sure about yet.

In theory, this could be super complex. Parsing the full chain (an array) of the UpdatesProxy and/or Net Qube chain. Probably best avoided.

Perhaps something like:

  • A) new qubesdb-entry: connected to UpdatesProxy with tag $variable (where variable could be anon-gateway, cacher, none or something else) [1]
  • B) another new qubesdb-entry: connected to an UpdatesProxy that itself has somewhere in the chain a Net Qube with tag anon-gateway: yes | no [2]

[1] Might be useful to know, because depending on whether the template is connected to anon-gateway or cacher, the APT repository files need to be modified differently.

[2] Used by leak shield firewall to avoid non-torified connections.

  • should it be just qubesdb entry (accessible with qubesdb-read tool),

qubesdb-read seems nice for this purpose.

or maybe put it into qubes-service tree, so a flag file in /run/qubes-service will be automatically created? the latter is probably easier to handle, but could have weird interactions with the user manually setting/unsetting the thing using qvm-service tool

Indeed.

@marmarek

whonix-ws-16 -> cacher -> sys-vpn -> sys-whonix
But since this is generally not recommended and a minority use case, we can probably ignore it.

Fair enough.

Generally, I'd prefer UpdatesProxy (whichever implementation it is) to announce itself whether

a) it uses Tor to access updates, and
b) whether it supports onion addresses.

The "a" is necessary to avoid leaks, the "b" is nice-to-have to choose repository addresses. In practice, we approximate "b" with "a", which is okay for the current needs.
Having the UpdatesProxy announce it itself allows more flexibility (like something that routes updates via Tor but isn't Whonix Gateway). And then, the UpdatesProxy could use the qubesdb entry I proposed to see if it's connected to Whonix (if it isn't Whonix Gateway itself), and may set the appropriate flag based on that.

Is setting the flag as a magic header in an error page an issue? It isn't the most elegant way, but it works. If we want a nicer interface, I have two ideas:

  1. Some specific URL to fetch, not just an error page (not sure if tinyproxy supports that...). In that case, we could expect specific format, not just embedding a tag into existing HTML page.
  2. Call qubes.UpdatesProxy service with an argument, like qubes.UpdatesProxy+query-features, and have it return list of features in some easy to parse format. The nice part about this approach is, you can have separate script in /etc/qubes-rpc/qubes.UpdatesProxy+query-features that implements that, so you don't need to modify any html files at startup.

The second approach IMO is nicer, but technically could be inaccurate (policy can redirect services to different targets based on the argument, if you explicitly set the argument there). That said, if a user really wants to bypass this check, they can always do that. All the documentation and tools we have operate on the wildcard (*) argument, so there is no risk of doing that accidentally.
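
As a rough sketch of idea 2 (the path convention comes from the comment above; the script contents and the feature format are made up for illustration):

#!/bin/sh
# Hypothetical /etc/qubes-rpc/qubes.UpdatesProxy+query-features in the proxy qube:
# advertise the proxy's capabilities in an easy-to-parse form.
echo "tor=yes"
echo "onion=yes"

On the template side, something like qrexec-client-vm @default qubes.UpdatesProxy+query-features would then retrieve the list, with the policy resolving @default to the configured UpdatesProxy.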

BTW, all of the above (and the current situation too) have a TOCTOU problem. The connection is checked only at startup, but the user can later change both the netvm connection and the updates proxy redirection. A qube can see when its netvm was changed, but has no way to see when the qrexec policy was changed to redirect the updates proxy elsewhere (other than re-checking periodically...). That's a corner case that's probably okay to ignore, but better to be aware of its existence.

@tlaurion
Contributor Author

@unman @adrelanos : opinions on @marmarek's implementation suggestion above would help this important matter move forward.

@adrelanos

I was waiting to see if there would be some input from @unman.
What @marmarek said generally sounds good to me.

@marmarek:

Is setting the flag as a magic header in an error page an issue? It isn't the most elegant way, but it works.

Sounds similar to how Whonix Templates currently test whether they're connected to a torified tinyproxy.
Sounds good to me, but the decision on whether this is acceptable for cacher is @unman's to make here, I guess.

If we want a nicer interface, I have two ideas:

  1. Some specific URL to fetch, not just an error page (not sure if tinyproxy supports that...). In that case, we could expect specific format, not just embedding a tag into existing HTML page.

Not sure what the difference from the above is here, but it also sounds ok.

  2. Call qubes.UpdatesProxy service with an argument, like qubes.UpdatesProxy+query-features, and have it return list of features in some easy to parse format. The nice part about this approach is, you can have separate script in /etc/qubes-rpc/qubes.UpdatesProxy+query-features that implements that, so you don't need to modify any html files at startup.

Sounds nicer indeed.

The second approach IMO is nicer, but technically could be inaccurate (policy can redirect services to different targets based on the argument, if you explicitly set the argument there)

That said, if a user really wants to bypass this check, they can always do that.

Yes. And if users really want to do complicated modifications to do non-standard stuff they should have the freedom to do so. That's good for sure. The common convention in FLOSS is not to add artificial user freedom restrictions.

All the documentation and tools we have operate on the wildcard (*) argument, so there is no risk of doing that accidentally.

Good.

BTW, all of the above (and the current situation too) have a TOCTOU problem. The connection is checked only at startup, but the user can later change both the netvm connection and the updates proxy redirection.

Whonix Templates could wrap APT and do the check before calling the real APT but indeed. Still imperfect.

A qube can see when its netvm was changed, but has no way to see when the qrexec policy was changed to redirect the updates proxy elsewhere (other than re-checking periodically...). That's a corner case that's probably okay to ignore, but better to be aware of its existence.

Yes.

@adrelanos

This ticket is missing issue tracking tags.
