[application]     [   script    ]
[    OS     ]     [Flash player ]
[ Hardware  ]     [email client ]
                  [     OS      ]
                  [     HW      ]
Malicious/untrusted programs
- written by a user, on a multiuser system
- written by untrusted or unknown parties
- written by (incorrectly) trusted parties
"Trusted" means you believe the party is honest and competent.
- protect against yourself (make files read-only so you don't accidentally overwrite them)
Each component should only get the privilege it needs to do its job (the principle of least privilege; see the sketch below).
Good idea, prevents many security problems.
- Confidentiality: sensitive data doesn't leak
- Integrity: only authorized parties can cause changes
- Availability: program can't crash machine/other programs
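A minimal sketch of least privilege on a Unix-like system (the log file path is hypothetical): a program that starts with root privileges opens the one privileged resource it needs, then permanently drops root before doing anything else.

/* Sketch: keep only the privilege actually needed.
   Assumes the program starts as root (e.g. a setuid binary);
   the log path is a made-up example. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    /* The one privileged thing we need: open a root-only log file. */
    int fd = open("/var/log/private.log", O_WRONLY | O_APPEND);
    if (fd < 0) { perror("open"); return 1; }

    /* Drop group and user privileges for good; a later bug can no longer abuse root. */
    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* The rest of the program runs with only the user's normal rights,
       plus the single descriptor it legitimately needs. */
    const char msg[] = "running with least privilege\n";
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}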
Solution #0: Disallow untrusted programs (to trust, look over source code, extensive audit)
- pro: keeps untrusted programs off
- con: machine is useless (you'd have to write all the code yourself). Could make the rule, but people won't follow it and will load programs anyway. Exception: special-purpose machines: microwave, car, etc.
Solution #1: ignore problem
- pro: easy
- con: vulnerable.
- often the policy that is followed
- MS Word macros: ran when a file was opened. All of us have done this: download a program and run it (without auditing it or running a virus checker). We just ignore the possibility of a problem.
Solution #2: Static code scanning (virus checker). Look at code, not running program.
- pro: good at spotting known-to-be-dangerous code
- con: not good at spotting new malicious code
- Halting problem, don't know what code will do
- can try to add heuristics to catch new code, but it's an uphill battle: easy to circumvent
- pro: forces bad guy to create new malicious code, many people won't take time
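A toy sketch of what static scanning means in practice (the signatures below are made up): search a file's bytes for patterns known to appear in malicious code. It catches exactly what is already in the signature list and nothing new.

/* Toy signature scanner: report whether a file contains a known-bad byte
   pattern. The signatures below are invented for illustration. */
#include <stdio.h>
#include <string.h>

static const unsigned char *signatures[] = {
    (const unsigned char *)"\xeb\xfe\x90\x90",   /* hypothetical pattern #1 */
    (const unsigned char *)"DROPPER_V2",         /* hypothetical pattern #2 */
};
static const size_t sig_len[] = { 4, 10 };

/* Returns 1 if a known signature is found, 0 if not, -1 on I/O error.
   (Only scans the first 64 KB -- enough for a toy example.) */
int scan_file(const char *path) {
    static unsigned char buf[1 << 16];
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    for (size_t s = 0; s < sizeof signatures / sizeof signatures[0]; s++)
        for (size_t i = 0; i + sig_len[s] <= n; i++)
            if (memcmp(buf + i, signatures[s], sig_len[s]) == 0)
                return 1;    /* matches known-to-be-dangerous code */
    return 0;                /* no match -- says nothing about NEW malware */
}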
Solution #3: rigid limits on what untrusted code can do
- example: no access to files, no access to microphones/speakers
- will come back to this
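One concrete form of rigid limits on Linux (a sketch; the mechanism is OS-specific): seccomp strict mode, after which the process can only read and write descriptors it already has, and exit. No opening files, no network, no microphone.

/* Sketch: Linux seccomp strict mode as an example of rigid limits.
   After the prctl call, the only allowed system calls are read, write,
   exit, and sigreturn -- no opening files, no network, no new processes. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void) {
    printf("about to enter the sandbox\n");
    fflush(stdout);

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    /* Still fine: write() on an already-open descriptor. */
    write(1, "inside the sandbox\n", 19);

    /* This would kill the process: open() is not on the allowed list. */
    /* open("/etc/passwd", O_RDONLY); */

    /* Raw exit(2); glibc's _exit() uses exit_group, which strict mode forbids. */
    syscall(SYS_exit, 0);
}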
Solution #4: perimeter defense
- case-by-case judgements: ask the user (browser asks whether you want to allow an action)
- more flexible than solution #0
Solution #5: flexible defense at run-time
- carefully constructed, detailed limits on runtime behavior
- structured way of making exceptions ("can't open file until we say OK")
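A sketch of what flexible run-time limits might look like (all names here are hypothetical, not a real API): untrusted code can only open files through a wrapper that consults a policy table, and approved exceptions are added to the table at run time.

/* Sketch of a run-time policy check with a structured way to add exceptions.
   All names here are hypothetical illustration, not a real API. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>

#define MAX_RULES 32
static char allowed_prefix[MAX_RULES][256];
static int  n_rules = 0;

/* Called when the user/administrator approves an exception ("OK for /tmp/..."). */
void policy_allow_prefix(const char *prefix) {
    if (n_rules < MAX_RULES)
        snprintf(allowed_prefix[n_rules++], 256, "%s", prefix);
}

/* The only way untrusted code is allowed to open files. */
int guarded_open(const char *path, int flags) {
    for (int i = 0; i < n_rules; i++)
        if (strncmp(path, allowed_prefix[i], strlen(allowed_prefix[i])) == 0)
            return open(path, flags);
    errno = EACCES;          /* not covered by any rule: refuse */
    return -1;
}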
These are policy options
- policy: what the rules are
- enforcement/mechanism: how we make sure the rules are followed (rules without enforcement don't work)
Done at 2 levels
1. Memory safety (program "in a box")
- Program can't read/write memory, except its own
- Program can't jump, except to its own code (can jump into OS code, after check has been done)
- can interact with other programs, but only through approved interfaces
write() {
    check_permission();
    setup();      /* <= program could jump here, bypassing the permission check */
    writefile();
}
2. High-level security
- design of official interfaces (design bugs vs implementation bugs; in practice, gets muddled)
- code behind interfaces
- bookkeeping (keep track of which program can do what, how many resources it has, etc.)
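A sketch of the bookkeeping side (field names are illustrative only): the trusted layer keeps a record per program of which interfaces it may call and how many resources it has used.

/* Sketch of per-program bookkeeping kept on the trusted side of the boundary.
   Field names are illustrative only. */
#include <stdbool.h>
#include <stddef.h>

struct program_record {
    int    program_id;
    bool   may_read_files;      /* which interfaces this program may call */
    bool   may_use_network;
    bool   may_use_microphone;
    size_t bytes_allocated;     /* resource accounting */
    size_t bytes_limit;
    int    open_handles;
    int    handle_limit;
};

/* Every official interface checks the record before doing work. */
bool can_allocate(const struct program_record *p, size_t n) {
    return p->bytes_allocated + n <= p->bytes_limit;
}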
Providing memory safety (later lecture on this)
- OS-style (hardware protection) vs language-style (software-only approach)
- program runs in interpreter or Virtual Machine (VM). Checks every read/write and jump
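A sketch of the language-style, software-only approach: a tiny interpreter that checks every load, store, and jump against the program's own region before carrying it out. The instruction format is invented for illustration.

/* Minimal sketch of software-enforced memory safety: an interpreter that
   checks every memory access and every jump before performing it. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MEM_SIZE 4096                     /* the program's own memory */

static uint8_t mem[MEM_SIZE];
static uint8_t acc;                       /* a single accumulator register */

enum { OP_LOAD, OP_STORE, OP_JUMP, OP_HALT };
struct insn { int op; uint32_t addr; uint8_t val; };

static void violation(const char *what) {
    fprintf(stderr, "memory-safety violation: %s\n", what);
    exit(1);
}

void run(const struct insn *code, uint32_t ncode) {
    uint32_t pc = 0;
    while (pc < ncode) {
        const struct insn *i = &code[pc];
        switch (i->op) {
        case OP_LOAD:                      /* check EVERY read */
            if (i->addr >= MEM_SIZE) violation("load out of bounds");
            acc = mem[i->addr];
            pc++;
            break;
        case OP_STORE:                     /* check EVERY write */
            if (i->addr >= MEM_SIZE) violation("store out of bounds");
            mem[i->addr] = i->val;
            pc++;
            break;
        case OP_JUMP:                      /* check EVERY jump */
            if (i->addr >= ncode) violation("jump outside own code");
            pc = i->addr;
            break;
        default:                           /* OP_HALT */
            return;
        }
    }
}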
Give info to software, make sure it doesn't leak.
"Confinement problem" => confine program so it can't communicate to others
- covert channel: tricky way of leaking data
- like signaling in bridge: a cough, how you place your cards, etc. Tricky programs work the same way.
- storage channels: leak info via filesystem/storage (sketched after these bullets)
- logfile: when an error is generated, the log gets a line. Can transmit info this way.
- Program does something to cause OS to change state of filesystem
- other program observes this, even w/o reading files
- could fill up filesystem (the timing of the fill-up also carries info)
- fill then empty: like a blinking light
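A sketch of a storage channel (the shared path is made up): the sender encodes bits by creating or removing a file; the receiver observes the filesystem state with stat() without ever reading the file's contents.

/* Sketch of a storage covert channel: signal bits by creating/removing a file.
   The receiver never reads the file -- it only observes filesystem state. */
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

#define SIGNAL_PATH "/tmp/covert_flag"   /* hypothetical shared location */

/* Sender: a 1 bit = the file exists during this time slot, a 0 bit = it doesn't. */
void send_bit(int bit) {
    if (bit) {
        int fd = open(SIGNAL_PATH, O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) close(fd);
    } else {
        unlink(SIGNAL_PATH);
    }
    sleep(1);                 /* one bit per agreed time slot */
}

/* Receiver: observe the state without reading any file contents. */
int recv_bit(void) {
    struct stat st;
    int bit = (stat(SIGNAL_PATH, &st) == 0);
    sleep(1);
    return bit;
}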
- timing channel: one program affects the performance of another
- one program can fill up the cache w/ garbage as a signal. When the CPU switches tasks, the other program runs slowly. (sketched below)
- problem: caching necessary to get performance... can't turn it off
- nearly impossible to stop in practice
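A sketch of the cache timing channel (buffer sizes are illustrative): to send a 1, the sender sweeps a buffer larger than the cache, evicting the receiver's data; the receiver times a pass over its own buffer and reads a slow pass as a 1.

/* Sketch of a cache timing covert channel. Buffer sizes are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <time.h>

#define BIG   (16 * 1024 * 1024)   /* intended to be larger than the last-level cache */
#define SMALL (256 * 1024)

static volatile uint8_t big[BIG], small_buf[SMALL];

/* Sender: thrash the cache for a 1, stay idle for a 0. */
void send_bit(int bit) {
    if (bit)
        for (size_t i = 0; i < BIG; i += 64)   /* touch one byte per cache line */
            big[i]++;
}

/* Receiver: a slow sweep of its own buffer means the sender sent a 1. */
int recv_bit(uint64_t threshold_ns) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < SMALL; i += 64)
        small_buf[i]++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    uint64_t ns = (uint64_t)(t1.tv_sec - t0.tv_sec) * 1000000000u
                + (uint64_t)(t1.tv_nsec - t0.tv_nsec);
    return ns > threshold_ns;
}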
- administrative channels
- billing, accounting, etc. Get charged for CPU usage; a program can vary this to send a signal
- log in with wrong username/pass 3 times, a msg gets sent to the user (1 bit)
When is leaking 1 bit/second useful? Keypress sniffer... leak a password. The leaker can use error-correcting codes if the channel is noisy, plus encryption -> so the traffic looks like random data.
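A minimal sketch of the error-correcting idea for a noisy 1-bit channel: send each bit three times and take a majority vote at the other end, so a single flipped bit per group is tolerated.

/* Repetition code for a noisy 1-bit-at-a-time channel:
   send each bit 3 times, decode by majority vote. */
void encode_bit(int bit, int out[3]) {
    out[0] = out[1] = out[2] = bit;
}

int decode_bit(const int in[3]) {
    return (in[0] + in[1] + in[2]) >= 2;   /* survives any single flipped bit */
}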
bottom line: covert channels are pretty much impossible to stop; you can only slow them down.
Design advice: ASSUME covert channels exist, make sure hard for them to leak.
"Thinking outside the box": easy to come up with new attacks (not just read/writes). Privacy protection is hard: once someone has info, if they want it to leak, it will.
      [untrusted app]
            |
            | (function call)
            V              untrusted
- - - - - - - - - - - - - - - - - -
            |              trusted
            V
        [URLopen]
          /    \
         /      \
        V        V
  [fileOpen]  [httpGet]
Assume file access prohibited, but http access OK.
Approach 1: treat URLopen as untrusted
- allow untrusted calls to httpGet
- forbid untrusted calls to fileOpen
- problem: URLopen keeps a local cache in the filesystem. Even for an http URL, the local cache access generates an error
One solution: fileOpen checks whether it's being asked for a cached file. (Not great: fileOpen learns details of URLopen.)
- Alternate: URLopen checks whether it's accessing a cached file (sketched below)
- Tough to enforce: hard to write secure systems software
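A sketch of that alternative, using the notes' names fileOpen and httpGet (the cache layout and hash are invented, and the two extern functions are assumed to be provided by the trusted layer): URLopen derives the cache path itself, so the only file it ever touches on behalf of untrusted code lives inside its own cache directory.

/* Sketch: URLopen confines its own cache lookups to one directory, so the
   "local cache" exception to the no-file-access policy stays in one place.
   CACHE_DIR and the hash are invented; fileOpen/httpGet come from the notes. */
#include <stdio.h>

#define CACHE_DIR "/var/cache/urlopen/"     /* hypothetical cache location */

extern int fileOpen(const char *path);      /* trusted; forbidden to untrusted callers */
extern int httpGet(const char *url);        /* trusted; allowed to untrusted callers */

static unsigned long hash_url(const char *s) {
    unsigned long h = 5381;                 /* simple string hash for the cache key */
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h;
}

int URLopen(const char *url) {
    char cached[512];
    snprintf(cached, sizeof cached, "%s%lx", CACHE_DIR, hash_url(url));

    int fd = fileOpen(cached);              /* only ever a path inside CACHE_DIR */
    if (fd >= 0)
        return fd;                          /* cache hit: serve the local copy */
    return httpGet(url);                    /* cache miss: fetch over the network */
}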
Turns out the simple policy ("can only access remote files") was hard to implement; the exception was the cache.
- PROBLEM: policy didn't specifically say cache was ok
- weird corner cases... you get burned. Policy not precise enough.
- worst: URLopen made by 1 company, fileOpen by another. Now it's tough to get the problem fixed.
Problem: component-based software or "What is a program?"
- In practice: have interacting software components written by different people
- separate reusable components + glue to assemble them into something with a user interface