Jameson "Chema" Quinn
2008-06-11 01:37:06 UTC
Somewhat off-topic, but I want to get this idea out there. Disclaimer: I am
suggesting a mechanism for extra security that would build beyond Bitfrost,
when we have yet to implement the relevant part of Bitfrost in the first
place. If you think it is a waste of time to look beyond the next step, no
need to read on.
What is Bitfrost's P_NETWORK trying to prevent? Some examples of
network-based misbehavior, which otherwise stay entirely within Bitfrost's
bounds:
- A spyware version of Browse, which reported all sites visited to a
specific URL.
- A spyware version of Write, which reported all texts written to a specific
crimethink-checking server.
- A bot-net slave, which periodically called a given server, and then
followed its orders to post spam comments on bulletin boards and blogs.
(This would run up against the network rate limit, but could still do
damage without exceeding it.)
On the other hand, some examples of legitimate network uses:
- Browse - able to go to an arbitrary URL
- Email - able to talk to a given server (and thus to cause that server to
send messages to an arbitrary IP).
- Chat - able to connect to a friend/friends *visible in the frame* and
exchange messages with them.
- Write - able to share with a friend/friends, again *from the frame*, and
exchange state update data with them.
As things stand in the Bitfrost spec, there is no way to prevent any of the
illicit actions without preventing all of the legitimate ones. This is a
problem, because the Sugar ideal is to make all activities shareable - that
is, essentially comparable to Write. It does not have to be that way.
One nice thing for a high-level communications layer like Telepathy would be
for it to support Bitfrost by being (in some configuration) solidly safer
than free network access. If an activity could be authorized (through user
actions in the frame) to talk to only certain "friends" (i.e., IP addresses),
it would drastically reduce the possibility that the activity would break a
user's privacy. Thus, there would be three kinds of activities:
- those with full network access, able to talk to arbitrary IP addresses
(Browse is inescapably in this category);
- those with some kind of "telepathy-only" access, which would only let them
talk to IP addresses that correspond to a friend sharing the specific
activity instance (Chat might fit here; certainly, Write would);
- and those with no network permissions (a rough sketch of the distinction
follows this list).
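To make the three tiers concrete, here is a minimal sketch of the decision
they imply, written in Python purely for illustration - none of these names
exist in the Bitfrost spec or in Sugar today:

    # Hypothetical permission tiers and check -- nothing here is a real
    # Sugar or Bitfrost API; it only illustrates the proposed split.
    FULL = "full"                 # arbitrary IP addresses (Browse)
    TELEPATHY_ONLY = "telepathy"  # only friends sharing this instance (Write, Chat)
    NONE = "none"                 # no network access at all

    def may_connect(tier, dest_ip, sharing_friend_ips):
        """Would this activity be allowed to open a connection to dest_ip?"""
        if tier == FULL:
            return True
        if tier == TELEPATHY_ONLY:
            # Only addresses that belong to friends currently sharing
            # this specific activity instance are reachable.
            return dest_ip in sharing_friend_ips
        return False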
The telepathy-only, middle security level would allow the last two "good"
use cases, while preventing the last two "bad" use cases. It could be
implemented by Sugar giving the activity some kind of key, valid only for
that specific instance (and renewed when the instance is resumed), which it
could use to "unlock" access to a given IP. I understand that the middle
security level would not necessarily be perfect - a man-in-the-middle attack
could well subvert any gains, and, especially in early versions, it would be
hard to guarantee that any abstraction layer was 100% successful at keeping
malformed requests from getting some illicit control over a lower layer -
but it would drastically reduce the practicality of any large-scale
snoop-net or bot-net for your average shareable activity. Even if the
connection to friend X were compromised, an activity would still have to
hope it was started in an instance that had been shared with friend X in
order to leak any data.
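For what it's worth, a minimal sketch of the per-instance key idea might
look something like this (the class and method names are made up for
illustration; this is not an existing Sugar or Telepathy interface):

    import os
    import socket

    class InstanceGate:
        """Hypothetical gate: tracks which friend IPs one activity
        instance has been allowed to reach."""

        def __init__(self):
            # Key issued by Sugar when the instance starts, renewed when
            # the instance is resumed.
            self.key = os.urandom(16)
            # Friend IPs unlocked through sharing actions in the frame.
            self.allowed_ips = set()

        def unlock(self, friend_ip):
            # Called on the user's behalf (via the frame), never by the
            # activity itself, when the instance is shared with a friend.
            self.allowed_ips.add(friend_ip)

        def connect(self, key, friend_ip, port):
            # The only way a telepathy-only activity gets a socket: it must
            # present the instance key, and the destination must have been
            # unlocked for this instance.
            if key != self.key or friend_ip not in self.allowed_ips:
                raise PermissionError("address not shared with this instance")
            return socket.create_connection((friend_ip, port))

Even this toy version shows why the compromised-friend case is contained:
spyware still cannot reach its collection server unless that server happens
to be an address the user explicitly shared the instance with.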
Go ahead - tell me why it's a bad idea.
Your friendly neighborhood security speculator,
Jameson Quinn