In this post I will try to introduce Linux sandboxes and why they are good for web browsers. If you already know the theory and would like to see how I’ve implemented it, just skip this post and jump to the second part.
Web browsers today are everywhere, and they are a huge pile of shit code, full of shiny things that sometimes hide bad surprises. Despite this fact, you want to use them daily, because too many things today depend on you visiting web sites, often requiring the latest web technologies.
Even if many vendors today take browser security seriously, the fast evolution of web standards makes it very hard to keep up on such big projects, and almost every day a new method appears in the wild to attack poor users with the web as a vector for evil code, exploiting either browser vulnerabilities or the users themselves.
There is no 100% security: if anyone tells you they have the panacea for all evil things and can show you how to be 100% protected online, they are lying, no exceptions. Despite that, something can be done to be at least a little bit more secure and to block the most common attack vectors, at a cost in terms of usability that is really cheap.
The segregation and isolation concept:
The (partial) solution presented here consists in applying the segregation idea to browsers (yes, browserS, with the final s, because we need more than one browser session). Clearly, we can’t open a separate browser for every single web site; this is partly done by vendors, as in Chromium, Chrome and others, with per-tab processes and per-process sandboxes.
The issue with browser sandboxes as they are implemented today is that they assign almost the same level of trust to every web site you visit, and they aren’t fully isolated from the rest of your environment.
Expanding the concept of isolation and segregation of processes, you can think of a system where every single program is well isolated from the others, and programs are categorized by how much you trust them, each getting a different level of access to system resources.
A well known, even if not the only, implementation of this concept is Qubes OS, which uses different virtual machines to segregate things.
Anyway, isolating things doesn’t come for free; there are some drawbacks. To achieve a decent user experience and to let things fully work, we need to allow communication between the different components of the system, and to present results to the user in a simple, accessible and coherent interface. To do that, we need to find the best balance between isolation and communication, two concepts that are at the antipodes.
Different browsers for different web site trust levels
There are web sites and web sites.
There are web sites you own, running on your own server and maybe coded by you. Maybe they are even on your local LAN, and you trust yourself as a coder and/or sysadmin. So you trust your sites too.
There are web sites of well known companies, which you trust not to do anything too evil, even if you know they carry third party advertising banners that you block.
There are web sites of the dark web, or from obscure little hacker groups; or maybe you just want to reclaim your privacy; or maybe you even visit some known evil site from time to time, because your interest is in learning how they attack users.
Some web sites you trust, or need to use, require access to your webcam and mic; others don’t. Some sites you want to have full access to your webcam; for others, you don’t want to risk them being able to access it.
Not all sites deserve the same trust from you, and not all sites need the same level of access to your resources.
Personally, I have 3 levels of access I grant to the web:
- trusted: web sites I trust not to be evil, or my own web sites. Web sites I choose to use for WebRTC, to run plugins freely, or to access the mic.
- secure: web sites I normally think are safe, but I don’t really know and don’t fully trust.
- paranoid: web sites I know to be evil, or think are probably malicious, or where I just want to be reasonably sure it’s hard for them to trace me and my identity.
Of course, it’s up to the user to choose how many levels of access to grant to the browsers, how to balance convenience against paranoia, and to avoid opening an evil web site at the trusted level.
My basic goal is to enforce those 3 levels of access in the most solid and secure way I can, giving each browser access only to the resources it really needs and nothing more; plus, the ability to launch more than one session at the same trust level, while maintaining some isolation between browser sessions within that level.
The Linux way: namespaces, seccomp-bpf, cgroups and common uid/gid
Back to browsers: we could use the same approach as Qubes OS and rely on virtual machines, but they are expensive in terms of storage and RAM, not exactly the best idea on a workstation or a laptop.
The Linux kernel, however, comes to the rescue with a lot of good features that help us avoid the overhead of a virtual machine.
First, since Linux is a multiuser OS inspired by UNIX, it features the usual UNIX permissions and separate users/groups. So, the most obvious thing to do is to run the different browsers under different UIDs/GIDs. This approach alone is effective in preventing one access level from touching the files of the other levels, or the files in your home that you use for everything except browsing, and thus adds some security to the scheme; but it’s fairly simplistic and very limited, and offers only trivial protection against the really important issues. Anyway, we will *also* use this opportunity; in Italy we say “it’s like pork, you don’t throw anything away”.
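As a quick sketch of the per-UID idea (the account names here are my own placeholders, and whether sudo forwards DISPLAY depends on your sudoers configuration):

```shell
# One unprivileged account per trust level (names are placeholders).
sudo useradd --create-home --shell /usr/sbin/nologin web-trusted
sudo useradd --create-home --shell /usr/sbin/nologin web-secure
sudo useradd --create-home --shell /usr/sbin/nologin web-paranoid

# Let the "secure" account draw on the local X display, then start a
# separate browser under its own UID/GID.
xhost +SI:localuser:web-secure
sudo -u web-secure -H env DISPLAY="$DISPLAY" firefox --no-remote
```

Each account gets its own $HOME, so profiles, caches and cookies of one level never mix with the files of another.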
Then, we come to some more effective protection: seccomp-bpf, short for “secure computing mode with Berkeley Packet Filter rules”. The original seccomp, introduced in kernel 2.6.12 back in 2005, is a strict mode that limits a process to a handful of syscalls (read and write on already open file descriptors, plus exit and sigreturn), sending SIGKILL in case of abuse; kernel 3.5 later added the BPF filter mode, which lets you allow or deny arbitrary syscalls with filter rules.
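From the shell, the easiest way to play with seccomp filters without writing BPF by hand is systemd’s SystemCallFilter= property, a front-end built on seccomp (this assumes your distribution runs systemd; @system-service is one of systemd’s predefined syscall groups):

```shell
# Run a command in a transient unit that may only use the syscalls in
# the @system-service set; anything outside the set fails with EPERM
# instead of killing the process.
systemd-run --user --wait --pipe \
    -p SystemCallFilter=@system-service \
    -p SystemCallErrorNumber=EPERM \
    cat /etc/hostname
```

A real browser needs a much wider filter than this, but the principle is the same: a fixed list of allowed syscalls, everything else refused.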
Starting from Linux 2.6.23, another useful feature was introduced: namespaces. Since that early implementation many things have been added, and as of kernel 3.8 they were pretty mature; we will take full advantage of namespaces in our attempt at a more secure way to browse the web.
Thanks to namespaces, we can achieve isolated file system mounts (a sort of chroot, but better), an isolated network stack (and thus per-sandbox firewall rules and routing), an isolated hostname, isolated IPC and isolated PIDs.
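A quick way to see most of these namespaces in action is util-linux’s unshare(1) (this form needs root; the hostname and command here are just for illustration):

```shell
# New mount, UTS, IPC, network and PID namespaces; --fork is needed so
# the child is the first process of the new PID namespace.
sudo unshare --mount --uts --ipc --net --pid --fork sh -c '
    hostname sandboxed   # only this UTS namespace sees the change
    hostname             # prints "sandboxed"
    ip link              # the fresh network namespace only has "lo"
    echo "my pid: $$"    # first PID in the new PID namespace
'
```

Outside the sandbox, the hostname is untouched and the real network interfaces are still there: each namespace got its own private copy of that resource.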
There is also another namespace available in the Linux kernel, the user namespace, which permits “subusers” and “subgroups”. In other words, since namespaces are also available to unprivileged users, you can map UIDs so that they appear different “inside” the sandbox and “outside” in the system. While this feature is amazing and very useful for containers, it doesn’t give us a real advantage here: it doesn’t create fully independent UIDs inside the sandbox, it just isolates them from the outside with a 1:1 map, so it still uses real unprivileged UIDs outside. As we will use really different unprivileged UIDs outside, and we don’t need a privileged one inside the sandbox, there is no advantage in using user namespaces in our case; unless you want to do a lot of extra work to build a fully multiuser environment where different humans use your machine, but in that case your machine can’t really be secure, so we assume it’s a single user one.
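You can see the 1:1 mapping for yourself (assuming your kernel allows unprivileged user namespaces, which depends on distribution and sysctl settings; the file path is just an example):

```shell
id -u                                 # your real UID, e.g. 1000
unshare --user --map-root-user sh -c '
    id -u                             # prints 0: we look like root inside
    touch /tmp/userns-demo
'
ls -ln /tmp/userns-demo               # the file is owned by your real UID
                                      # outside: no new identity exists,
                                      # only a mapping
```

This is exactly why it doesn’t help us: the “root” inside is still your everyday unprivileged UID as far as the rest of the system is concerned.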
Those features aren’t a way to fix everything and all issues, nor are they the only features that can help us reach our goals, but for this implementation I will concentrate and focus on them; we will treat other possibilities only as a side note, or maybe in the future I will write something about them too.
Next: how I realized it on my workstation