October 25, 2006
General Campus Announcement
We are preparing a general campus announcement describing the project. Once it goes out, we will schedule a meeting with interested parties and use the feedback from that meeting to prepare a needs assessment document.
October 12, 2006
This week we began making preparations for a virtual test lab. The team has received a number of test machines and space to set them up. We are excited to use these machines to explore some of our ideas regarding possible implementations.
During a planning session this week we decided that we might be able to produce a working solution without a scheduling server daemon if we make use of atomic database transactions and a "push" paradigm, whereby clients make their availability known through POSTs to an application running on a web server. This is appealing because it lets us sidestep several potential issues:
1) How to prevent race conditions when claiming an available machine
2) How to update the database without storing database credentials on user machines (which would leave them vulnerable to packet sniffing)
3) How to poll machines for availability (no longer necessary)
We will probably still have to create a client daemon / service to change firewall settings and handle other accessibility issues, but this approach looks promising.
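The race-condition point (issue 1 above) can be sketched with a single conditional UPDATE: if claiming a machine is one atomic statement, two simultaneous requests can never grab the same machine. This is only an illustration of the idea, assuming a hypothetical `machines` table; the real schema and database are still undecided.

```python
import sqlite3

# Hypothetical schema: one row per lab machine, claimed_by is NULL when free.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE machines (host TEXT PRIMARY KEY, claimed_by TEXT)")
conn.executemany("INSERT INTO machines VALUES (?, NULL)",
                 [("lab-01",), ("lab-02",)])
conn.commit()

def claim_machine(conn, user):
    """Atomically claim one free machine for `user`; return its host or None.

    The claim is a single conditional UPDATE, so it either succeeds
    completely or matches zero rows -- no window for a race.
    """
    cur = conn.execute(
        "UPDATE machines SET claimed_by = ? "
        "WHERE host = (SELECT host FROM machines "
        "              WHERE claimed_by IS NULL LIMIT 1) "
        "AND claimed_by IS NULL",
        (user,),
    )
    conn.commit()
    if cur.rowcount == 0:
        return None  # no free machine left
    row = conn.execute(
        "SELECT host FROM machines WHERE claimed_by = ?", (user,)).fetchone()
    return row[0]
```

With two machines in the table, two claims succeed and a third returns None, which is exactly the behavior the request webpage would need.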
October 03, 2006
Updates After the First Week
Well, the Virtual Sites project is now just over a week old, so it seems appropriate to give some updates on what has been going on. We posted some analysis notes to the Wiki to explain some of the options we are looking at and the respective pros and cons of each.
One resource we received this week was a PowerPoint detailing the implementation of a remote access strategy currently in use at the University of Wisconsin-Stevens Point. Their system uses an ActiveX control and a backend database to create a remote connection from a remote PC running Internet Explorer. In terms of efficiency it is a great system, because it makes use of off-warranty machines their IT group already had, as well as their campus lab machines during off hours.
One con we see with it, though, is a vulnerability to denial-of-service attacks, since a lab machine is allocated to a user before that user authenticates. A malicious user (even one outside that university) could write a script that requests machines and continues to request more after each login fails, eventually locking up all machines permanently. One way to defend against this would be to put the request webpage behind a Cosign authentication screen (limiting requests to people who actually have the credentials to log into the machines) and to use a scheduling daemon that keeps track of which IP has requested a machine.
This scheduling daemon could then dynamically create and send an .rdp file to the authenticated user, temporarily containing valid connection information. This would allow a connection without the need for an ActiveX control (and consequently without Internet Explorer, or even a machine running Microsoft Windows, so Mac users are covered too). We spent some time looking at several of these .rdp files in hopes of automating the login process through the password hash stored in each file. Unfortunately, the hash is specific to the machine that created the .rdp file (and to the user logged into that machine at the time), so these files aren't completely portable.
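Generating those .rdp files dynamically is straightforward, since the format is just text lines of the form `name:type:value`. Here is a minimal sketch of what the scheduling daemon might emit; the hostname and username are made-up placeholders, and (as noted above) we deliberately leave out the non-portable password hash.

```python
def make_rdp(host, port, username):
    """Build the text of a minimal .rdp file pointing at `host`.

    Keys use the standard "name:type:value" .rdp syntax (s = string,
    i = integer). The machine-specific password hash line is omitted
    because it only works on the machine that generated it.
    """
    lines = [
        "screen mode id:i:2",              # 2 = full-screen session
        f"full address:s:{host}:{port}",   # machine the user was allocated
        f"username:s:{username}",
    ]
    return "\r\n".join(lines) + "\r\n"     # .rdp files use CRLF line endings
```

The daemon would write this out per request and serve it with an appropriate content type, so clicking the file launches the user's RDP client directly.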
The net result is that a user would have to authenticate twice: once to request a machine, and once after receiving the .rdp file, to the machine they are connecting to. Less than elegant, for sure, but a good way to address the security concerns. In an ideal setting, we would also open a port on the lab machine only for the IP that requested it, and only for a set period of time.
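The per-IP port opening is something the client service could do by shelling out to the Windows XP SP2 firewall tool. As a rough sketch (the exact `netsh firewall` syntax still needs to be verified against our lab images, and the rule name here is invented), the service would build and run something like this, then issue the matching `delete portopening` when the time window expires:

```python
def portopening_cmd(client_ip, port=3389):
    """Build the `netsh firewall` command a lab-machine service could run
    to open the RDP port only for the requesting IP.

    Assumption: Windows XP SP2 `netsh firewall add portopening` syntax;
    "VirtualSitesRDP" is just a placeholder rule name.
    """
    return (
        "netsh firewall add portopening protocol=TCP "
        f"port={port} name=VirtualSitesRDP mode=ENABLE "
        f"scope=CUSTOM addresses={client_ip}"
    )
```

The service would hold on to the rule name so it can remove the opening again after the scheduled session ends.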
There are still a lot of questions and concerns that would have to be answered for a solution like this to work. For example, do we poll the machines to see which ones are in use, do we let them report logins and logoffs to the scheduling daemon, or do we do a combination of both? We also need to make sure the scheduling daemon makes atomic updates to its machine-use state variables, since it would need to be multithreaded to handle multiple simultaneous machine requests and login/logoff updates. If the scheduling daemon also wrote out to an external database, we could include some interesting usage information on the machine request webpage and even update it dynamically through the use of AJAX.
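The atomic-update concern above boils down to guarding the daemon's shared state with a lock, so that concurrent request threads and login/logoff reports never interleave a partial update. A minimal sketch, assuming hypothetical "login"/"logoff" events pushed by the lab machines:

```python
import threading

class MachineState:
    """In-memory machine-use state for a multithreaded scheduling daemon.

    Every read or write takes the lock, so each update is atomic with
    respect to the daemon's other threads.
    """

    def __init__(self, hosts):
        self._lock = threading.Lock()
        self._in_use = {h: False for h in hosts}

    def report(self, host, event):
        """Record a "login" or "logoff" event pushed by a lab machine."""
        with self._lock:
            self._in_use[host] = (event == "login")

    def free_hosts(self):
        """Return the machines currently available for allocation."""
        with self._lock:
            return [h for h, busy in self._in_use.items() if not busy]
```

Whether this state lives only in memory or is mirrored to an external database (for the AJAX usage display) is one of the open questions.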
More to come soon.