Google's SPDY On Its Way To Standardization

The time is right to start work on a new version of HTTP?

I'd say the time was right several years ago; it's well past time now. 😀 Better late than never, though, as they say. Thanks, Google, for putting serious effort into advancing internet protocols.

😉
 
They should propose something like compressing TCP packets at the hardware level (Ethernet). It would cut the packet count and make communication a lot faster.
Basically, they should make HTTP binary. I'm not talking about encryption; that's a different thing altogether. CSS for bigger sites can run to 100 KB, but in a binary format it wouldn't be more than a few KB.
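For what it's worth, most of that size win comes from compression rather than binary encoding as such: HTTP/1 can already serve CSS with Content-Encoding: gzip, and SPDY additionally zlib-compresses the request/response headers themselves. A quick Python sketch of how well repetitive text like CSS compresses (the stylesheet snippet here is made up and repeated to mimic the redundancy of a real large file):

```python
import zlib

# Hypothetical, highly repetitive stylesheet, repeated to simulate
# the redundancy typical of a large real-world CSS file.
css = (".btn { color: #333; padding: 4px 8px; border: 1px solid #ccc; }\n"
       * 1500)

compressed = zlib.compress(css.encode("utf-8"), 9)

print(f"original:   {len(css):>7} bytes")         # ~95 KB
print(f"compressed: {len(compressed):>7} bytes")  # a few hundred bytes here
# Real CSS is less uniform than this, but large reductions are common --
# and that's available today with plain gzip, no binary protocol needed.
```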
 
I'm curious as to how my firewall rules will need to change. Obviously the port will be easy, but what about servers initiating requests? Would that still count as a stateful connection? It sounds like they'd be changing the whole stateless connection setup that's in place now. I for one do not want random servers initiating communication with systems on my LAN. But since I'm not the protocol expert, I'm sure the folks at Google are already on this.
 
@zakaron: I'm curious as to how my firewall rules will need to change.

They won't. SPDY generally runs inside SSL on port 443, with all TCP connections initiated by the browser, just like you're used to with HTTP/1. When SPDY talks about server push, it means the server can initiate the transfer of sub-resources on an already-established TCP connection without being asked. (For example, a.html includes b.png as a sub-resource, so when the client asks for a.html the server also starts sending b.png; that way we don't have to wait for the client to receive the HTML, parse it, and send the request for b.png over the network.)
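For the curious, a pushed resource shows up on the wire as a new server-initiated stream (even-numbered, in SPDY) tied back to the client's original request through an Associated-To-Stream-ID field. Here's a rough Python sketch of the fixed fields of a SPDY/3 SYN_STREAM control frame, per the draft spec; the zlib-compressed name/value header block that would follow is omitted:

```python
import struct

SYN_STREAM = 1               # SPDY/3 control frame type
FLAG_UNIDIRECTIONAL = 0x02   # set on server-pushed streams

def syn_stream_header(stream_id, assoc_stream_id, flags, length):
    """Pack the fixed fields of a SPDY/3 SYN_STREAM control frame.

    Layout: control bit (1) + version (15 bits) + type (16 bits),
    then flags (8 bits) + length (24 bits), then the 31-bit stream-id
    and associated-to-stream-id. 'length' counts everything after the
    first 8 bytes: 10 fixed bytes plus the compressed header block.
    """
    word1 = (1 << 31) | (3 << 16) | SYN_STREAM   # control=1, version=3
    word2 = (flags << 24) | length
    return struct.pack("!IIII", word1, word2,
                       stream_id & 0x7FFFFFFF,
                       assoc_stream_id & 0x7FFFFFFF)

# Server pushing b.png on stream 2, associated with the client's
# request for a.html on stream 1 (header block omitted, hence length=10):
frame = syn_stream_header(stream_id=2, assoc_stream_id=1,
                          flags=FLAG_UNIDIRECTIONAL, length=10)
print(frame.hex())
```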

 
@mcmanus: What you describe sounds kind of like multipart MIME. One "document" can contain multiple files - in this case a.html and b.png.

I have questions about how much attention Google gave to client-side security when they designed SPDY. Most engineers treat security as an afterthought - which is how we got into the current internet virus/malware mess. I'd like to think Google did things differently, but I have a hard time relying on that.
 
Wouldn't this also cause problems with data usage? If you go to a.html and the server then pushes all related content to the client, you're using transfer data. But say you never go any deeper than a.html: you've downloaded data you'll never digest, which is wasted usage that could eventually push you past your cap. Yes/no?
 
@akoegle - you can only push sub-resources (i.e. the push is tied to an existing request), so the potential is limited. You can't just push every file on the website hoping to seed the client's cache. (Well, you can push them, but the client won't use them outside the scope of the original resource they were associated with.) The client can also send a RST or advertise a zero window as soon as it starts seeing data, so the potential waste is bounded (but not zero) there as well. The biggest issue is getting pushed stuff that is already cached, or that interferes with other stuff wanted at a higher priority; those are important and unresolved details, but they shouldn't be the headline.
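To make that concrete, cancelling an unwanted push in SPDY/3 is just a small RST_STREAM control frame carrying the CANCEL status code. A rough sketch of its layout per the draft spec:

```python
import struct

RST_STREAM = 3   # SPDY/3 control frame type
CANCEL = 5       # status code: client no longer wants this stream

def rst_stream(stream_id, status=CANCEL):
    """Pack a SPDY/3 RST_STREAM frame -- how a client refuses a push."""
    word1 = (1 << 31) | (3 << 16) | RST_STREAM  # control=1, version=3
    word2 = 8                                   # no flags; 8-byte payload
    return struct.pack("!IIII", word1, word2,
                       stream_id & 0x7FFFFFFF, status)

# Cancel server-pushed stream 2 (say b.png is already in our cache):
print(rst_stream(2).hex())
```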

@teramedia - kinda like multipart, yes. But each resource has its own stream and URI, so it can be addressed/reused/cached independently, as well as rejected independently, downloaded at a different rate, etc.
 