A Standard for Greatly Reducing HTTP Connections

  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

I propose that browsers should be able to take a zipfile with a hash (#hash) on the end of the filename, retrieve the corresponding file from the zipfile, and use the file as if it were the type indicated by the filename in the hash, in the context of the element.

The zipfile should be able to include subdirectories so that its contents can be organized.

I might have a zipfile named "ui.zip" that includes a handful of images used for buttons, header backgrounds, and other elements of a webpage UI.

I might use an <img> element, or a CSS background-image to utilize the resource.

Code: [ Select ]
<img src="./ui.zip#logo.png" alt="logo"/>

Code: [ Select ]
.my-button {background-image:url('./ui.zip#buttons/my-button.png');}


I might use a <script> or <link> element to utilize the resource.

Code: [ Select ]
<script type="text/javascript" src="ui.zip#scripts/eye-candy.js"></script>

Code: [ Select ]
<link href="./ui.zip#css/style.css" rel="stylesheet" type="text/css" media="screen, projection">


This could greatly reduce the number of HTTP connections a browser would make to the server.

Because of the compression, it could be a more effective alternative to CSS sprite image sheets.

Because browsers would retrieve resources from the zipfile as if they were the files themselves, existing sites considering CSS sprites to reduce the number of HTTP connections being made to their servers could implement this alternative instead, without needing to redesign the site's styles to use complicated background-position CSS rules. A savvy administrator could make creative use of sed to replace the image URLs in the existing files.
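
As a rough sketch of that transition (paths invented for illustration), a rule in an existing stylesheet would only need its URL rewritten:

Code: [ Select ]
/* Existing rule: the image is fetched with its own HTTP request */
.my-button {background-image:url('./images/buttons/my-button.png');}

/* Same rule after rewriting the URL to point into the archive */
.my-button {background-image:url('./ui.zip#buttons/my-button.png');}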

It would utilize an already widespread format that nearly everyone on the Internet is able to use right now.

I invite everyone to point out any Advantages, Drawbacks, or Additions they can think of about such a standard.
My goal is to have this evolve into an open standard that browser makers would seriously consider implementing.

  • UPSGuy
  • Lurker ಠ_ಠ
  • Web Master
  • User avatar
  • Posts: 2733
  • Loc: Nashville, TN

Post 3+ Months Ago

I like the idea, but what are your feelings on a standardized file type? The *nix crowd may reject the notion of using the zip format, even though Win and Mac have included it for some time.

There's also the possibility of creating a new file format specific to optimizing this particular use (save the issues of variable configurations during file creation?)
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

.nra - Network Resource Archive

But then, why limit it to one file and/or compression type? How about a tag that lets you specify the type and file:

Code: [ Select ]
<head>
<nra id="images" type="archive/nra" href="img/images.nra" />
<nra id="photos" type="archive/zip" href="img/photos.zip" />
...
</head>


And then reference it by ID everywhere else:

Code: [ Select ]
<img nra="images" src="img/logo.png" alt="logo"/>

Code: [ Select ]
<img nra="photos" src="img/dog.jpg" alt="my dog"/>


This would also allow a webmaster to place the individual images on the server as well as the archive, so that if the browser doesn't support the archive functionality, or if the image itself is missing from the archive, the browser can fall back to looking for just the filename itself.
  • UPSGuy
  • Lurker ಠ_ಠ
  • Web Master
  • User avatar
  • Posts: 2733
  • Loc: Nashville, TN

Post 3+ Months Ago

Sounds good. Where would you place the best practice: one archive per content type? Would you want the same archive to hold your js/css/imgs/etc.?
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

I think it depends how you plan to optimize your content. If you have a relatively small number of resources, then placing them all in the same archive would cut down on the number of HTTP connections pretty dramatically. However, if there are resources that are only used on certain pages, then it wouldn't make sense to include them in the archive for every page, even if the archive is cached, since it will lengthen the download time for other, more important resources such as display elements.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Well, many modern browsers already support gzip compression of pages, so what if the standard allowed this to work with "*.tar.gz" or even "*.bz2" files as well?

I don't really like the idea of using a totally new file format for this, because it's essentially just defining a method for browsers to access individual files within a package via the URL, without a querystring.

I believe Mozilla already has zipfile support for Firefox extensions and for gzip compression, so for them to implement this would only require changes to the way Firefox works with the URL of an image/etc., rather than requiring them to implement a whole new format.

Internet Explorer, Opera, Flock, Safa... errr, what kind of compressed file support does Safari/Mac have?
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

Come to think of it, a new HTML tag is probably overkill. Why not just use the <link> tag with a different rel attribute, like this:

Code: [ Select ]
<link id="images" rel="nra" type="archive/nra" href="img/images.nra" />
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

spork wrote:
However, if there are resources that are only used on certain pages, then it wouldn't make sense to include them in the archive for every page, even if the archive is cached, since it will lengthen the download time for other, more important resources such as display elements.


Files common to all pages might go into one zipfile, and in-page elements might go into other zipfiles.

A browser might use one connection to keep retrieving text content while it used another connection to retrieve the zipfile referenced by a stylesheet, with the stylesheet in turn referencing the zipfile that contained it, instead of being allowed to make a dozen new connections back to the server to fetch a dozen images.
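
As a sketch of that arrangement (archive and file names invented for illustration), the page would pull the stylesheet out of the archive, and the stylesheet would point back into the same archive for its images:

Code: [ Select ]
<link href="./ui.zip#css/style.css" rel="stylesheet" type="text/css" media="screen, projection">

Code: [ Select ]
/* Inside ui.zip, css/style.css refers back into the archive it came from */
#header {background-image:url('./ui.zip#images/header-bg.png');}
.my-button {background-image:url('./ui.zip#buttons/my-button.png');}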

The reduced load on the server would allow it to serve these zipfiles at faster speeds.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

spork wrote:
Come to think of it, a new HTML tag is probably overkill. Why not just use the <link> tag with a different rel attribute, like this:

Code: [ Select ]
<link id="images" rel="nra" type="archive/nra" href="img/images.nra" />


That's a good idea.
It would essentially preload a package full of images and allow elements to reference them as usual, right?

A browser would be required to wait for archives to load before it was able to tell whether it needs to make a request to the server for an element.

Because <link> elements are <head> content, you would be forced to load archives before any content, or after all content has loaded if something such as a "defer" attribute were available. You couldn't, for instance, have a <script> archive before the closing </body> tag.
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

joebert wrote:
That's a good idea.
It would essentually preload a package full of images and allow elements to reference them as usual right ?

Exactly. The benefit here is that the elements would first attempt to load the resource from the archive specified, but if that's unsuccessful for any reason (no browser support, file not in archive, etc.), the element simply falls back to loading the resource as if no NRA were associated with it, essentially ignoring the nra="xxx" attribute.

joebert wrote:
A browser would be required to wait for archives to load before it was able to tell whether it needs to make a request to the server for an element.

Because <link> elements are <head> content, you would be forced to load archives before any content, or after all content has loaded if something such as a "defer" attribute were available. You couldn't, for instance, have a <script> archive before the closing </body> tag.

I suppose you have a point, although I like the idea of being able to defer the loading of the archive until the first time one of the resources is requested from it.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Quote:
but if that's unsuccessful for any reason (no browser support, file not in archive, etc.), the element simply falls back to loading the resource as if no NRA were associated with it


That introduces a big problem with my proposal right there.
No way to work around missing support comes to mind. A lack of support would leave a page completely broken. :|
  • UPSGuy
  • Lurker ಠ_ಠ
  • Web Master
  • User avatar
  • Posts: 2733
  • Loc: Nashville, TN

Post 3+ Months Ago

How about having the rel tag point not to a file, but to a folder where all the full files are located, plus the archive for them reflecting the same directory structure? If the archive is supported, use it; if a needed file is corrupted, or no support is found at all, use the full files from the same directory.
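
Something along these lines, perhaps (the attribute names and file layout here are only a guess at how it might look):

Code: [ Select ]
<!-- href points at a directory holding both the loose files and an archive mirroring the same structure -->
<link id="ui" rel="nra" href="/resources/ui/" />
<!-- /resources/ui/ might contain logo.png, buttons/my-button.png, and an archive such as ui.zip -->
<img nra="ui" src="logo.png" alt="logo" />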
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Unless (I always hit submit too soon...) it was switched around, so that the filename came first and the archive were specified in the hash!

Browsers with no support would just ignore the hash I believe, but browsers with support would know to load and check the archive for the file specified before the hash.

This could be implemented as "value added" functionality to the <link> element proposal. It could serve as a hint to let a browser know it's going to load the resource from the archive, or as a way to specify special-case archives inline.
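
For example, something along these lines (just a sketch of the reversed notation, with made-up paths):

Code: [ Select ]
<img src="./images/logo.png#ui.zip" alt="logo"/>

Code: [ Select ]
.my-button {background-image:url('./images/buttons/my-button.png#ui.zip');}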
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

joebert wrote:
That introduces a big problem with my proposal right there.
No way to work around missing support comes to mind. A lack of support would leave a page completely broken. :|

Actually, quite the opposite. Imagine that you declare your NRA with a <link> tag, like above:
Code: [ Select ]
<link id="images" rel="nra" type="archive/nra" href="img/images.nra" />

Browsers that don't support this functionality will simply ignore this link tag. So far, so good.

Now suppose we declare an image like this:
Code: [ Select ]
<img nra="images" src="img/logo.png" alt="logo" />


If the browser supports the NRA functionality, it will look in the archive images.nra for the file img/logo.png. If found, that file will be used. If the file is not found, or if the browser doesn't support NRAs and has ignored the archive declaration, then the nra="images" attribute is also ignored, and the browser loads the image from img/logo.png, a regular image relative to the current directory.
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

The only thing I don't like about using the hash notation is that it seems a bit hackish. Defining an actual structure for this added functionality allows for expansion/modification later on.
  • Bigwebmaster
  • Site Admin
  • Site Admin
  • User avatar
  • Posts: 9089
  • Loc: Seattle, WA & Phoenix, AZ

Post 3+ Months Ago

I find this entire proposal interesting because I really do think it would decrease load on servers a great deal, as well as speed up the time it takes for pages to load on the client's end. The majority of the load on the Ozzu server is HTTP connections.

I believe that in the past most browsers have normally had the default for max concurrent connections (HTTP keep-alive or persistent connections) set to two, per RFC 2616. This is actually a good thing for servers, as it does help to spread out the load:

Quote:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.


http://www.w3.org/Protocols/rfc2616/rfc ... l#sec8.1.4

However, I think I have heard that with FF3 it is increased to 6; I would have to double-check whether they are doing this and whether they are in fact not following the RFC 2616 recommendation (or whether that recommendation has recently changed). So if a webpage you were visiting had 1000 objects to load (from CSS, images, JavaScript, etc.), it could take some time depending on the server load, the client's computer processing and transfer speed, and whether or not anything is stored in cache.

The first thing I thought of when seeing this thread was how similar this is to how game programmers package everything up into a few files. For instance, a particular game I have played from Westwood (now EA) packages up all of the game content into files called .mix files. It allows you to load one file versus thousands of files separately. In a way it's kind of the same concept, except here you would be reducing HTTP connections and also finding a way to get the most out of the RFC 2616 recommended persistent connection limit. By the way, in case you were interested in changing this limit with IE (not sure how you would do it with FF), I made a thread a long time ago on how to do it:

mswindows-forum/increase-browser-default-persistant-connection-t490.html

Per the RFCs, though, you probably shouldn't do it, out of respect for the people running servers. I have tried it in the past and have noticed a huge increase in the speed at which webpages load. That is one reason why I think Joebert's proposal could really improve load times on the client's end for websites, as well as decrease load on the server end. If something like this were available, I would most likely implement it.
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

Bigwebmaster wrote:
The first thing I thought of when seeing this thread was how similar this is to how game programmers package everything up into a few files. For instance, a particular game I have played from Westwood (now EA) packages up all of the game content into files called .mix files.

*sigh*... I miss the days of Command & Conquer.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Quote:
Actually, quite the opposite.


Sorry spork, I was trying to point out that this would be a problem if it were implemented the way I first suggested. We're on the same page now. :)

spork wrote:
The only thing I don't like about using the hash notation is that it seems a bit hackish. Defining an actual structure for this added functionality allows for expansion/modification later on.


Now, if it were implemented with the zipfile name first and the resource after the hash, I would have to disagree with it being hackish. Browsers already use the hash to denote specific places, or resources, within a webpage, so a hash pointing to a filename in a zipfile would seem natural.
However, since the only way to implement it without breaking pages in browsers that don't support it is to place the archive name after the hash, using the hash would be counter-intuitive.

I really am leaning toward the proposed <link> element for an archive; however, I disagree with using a new file format, because the existing zip/gz/bz2 formats are so readily available and easy to transition to.
I think the mime-type should remain application/x-gzip|application/zip|etc, the extension should remain zip|gz|bz2|etc, and a rel attribute would be the only thing telling the browser what to do with the archive.

Code: [ Select ]
<link href="./ui.zip" rel="nra" type="application/zip"/>



Bigwebmaster wrote:
However, I think I have heard that with FF3 it is increased to 6; I would have to double-check whether they are doing this and whether they are in fact not following the RFC 2616 recommendation (or whether that recommendation has recently changed). So if a webpage you were visiting had 1000 objects to load (from CSS, images, JavaScript, etc.), it could take some time depending on the server load, the client's computer processing and transfer speed, and whether or not anything is stored in cache.


I've read a few things over the years that actually encourage people to increase the number of connections via Firefox's about:config or whatever it is.

I'm certain the default in Opera is eight connections.
  • Bogey
  • Genius
  • Genius
  • Bogey
  • Posts: 8388
  • Loc: USA

Post 3+ Months Ago

I wish this would be so... it would be really awesome! Or is it so? lol
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

joebert wrote:
I really am leaning toward the proposed <link> element for an archive; however, I disagree with using a new file format, because the existing zip/gz/bz2 formats are so readily available and easy to transition to.
I think the mime-type should remain application/x-gzip|application/zip|etc, the extension should remain zip|gz|bz2|etc, and a rel attribute would be the only thing telling the browser what to do with the archive.

Code: [ Select ]
<link href="./ui.zip" rel="nra" type="application/zip"/>

I completely agree that existing compression standards should be supported. I guess I was more or less using .nra as a sort of abstract idea to represent any kind of compressed file format.
  • Bozebo
  • Expert
  • Expert
  • User avatar
  • Posts: 709
  • Loc: 404

Post 3+ Months Ago

Sounds interesting. If it existed, then it would exist in every browser but IE from day one, and appear in IE about 10 years late when something new replaces it...

Though, with images I already use one image with multiple areas on it, and have it shifted by pixels to reduce HTTP requests - similar to the common mouse-over image change technique.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Quote:
I completely agree that existing compression standards should be supported. I guess I was more or less using .nra as a sort of abstract idea to represent any kind of compressed file format.


I see.


Quote:
Though, with images I already use one image with multiple areas on it, and have it shifted by pixels to reduce HTTP requests - similar to the common mouse-over image change technique.


Well, the thing that got me going with this in the first place is that I'm currently working on revamping a forum theme, and the way it's currently set up, I either have to rewrite certain parts of the application itself to support CSS sprites, live with numerous HTTP connections, or get rid of some eye candy.

Basically, all of my options suck at the moment.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

I'm thinking that with a <link> element and a package, the directory structure of the compressed file would need to match that found in the elements using the resources.

This would be easy to do in most cases I think. Since the directory structure already exists on the server, it would just need to be packaged from the DocumentRoot of the site and the package would need to be loaded from the DocumentRoot.

I'm not sure whether a standard should require packages to be in the DocumentRoot though. And if packages are allowed to be loaded from any location on the server, for instance

Code: [ Select ]
<link href="/resources/ui.zip" rel="nra" type="application/zip"/>


Should the browser look for the items in that package assuming the package starts at DocumentRoot, or should the paths be relative to where the package was loaded from?

For instance, if the above package was loaded and contained "/buttons/button.png", should the path available to further elements in the page be "/buttons/button.png" or "/resources/buttons/button.png"?

I suppose there's the option of adding support for a <meta> element to decide this, working in a similar fashion to a <base> element. Perhaps it could be called "nra-base".

Code: [ Select ]
<meta http-equiv="nra-base" value="/resources/"/>
  • spork
  • Brewmaster
  • Silver Member
  • User avatar
  • Posts: 6252
  • Loc: Seattle, WA

Post 3+ Months Ago

In my opinion, the path specified in a particular element should be the direct path to the file within the archive, thus:

Code: [ Select ]
<link name="resources" href="/resources/ui.zip" rel="nra" type="application/zip"/>


would establish the location of the archive itself, and all references to files within that archive should be relative, with the root of the archive acting as the root of the resource itself:

Code: [ Select ]
<img nra="resources" src="images/buttons/button.png" alt="Home"/>
  • effim
  • Beginner
  • Beginner
  • User avatar
  • Posts: 35
  • Loc: Austin, TX

Post 3+ Months Ago

While the proposal is interesting and could be the start of solving the problem you present, I don't really think the problem being presented is in fact a problem currently or will become one in the future. Let me explain.

First, you're saying you're interested in reducing connections, not requests. Assuming that a server is using keep-alive and the browser supports it, hundreds of files can be served within a single connection through separate requests.

If you are getting at reducing requests, then obviously archiving resources into a single file or otherwise sending a multi-part response (like in email attachments) would be a way of doing that. Again, though, I don't really see any problem with the way it's currently done, provided that the server is configured properly, which brings me to the next thing...

Handling of an HTTP request for a static resource should be inexpensive, provided that the connection has adequate bandwidth and the file isn't large. A small server can determine the file requested, determine the best method for serving it, and serve it, all within a few milliseconds and without using much memory. Even for thousands of concurrent requests on thousands of unique static files, a suitably equipped system can cache the files in memory and incur no performance penalty on reading files from the disk.

Bigwebmaster wrote:
This entire proposal I find interesting because I really do think it would decrease load on servers a great deal as well as speeding up the time it would take for pages to load on the client's end. The majority of the load on the ozzu server is HTTP connections.


This is where things become problematic. Rather than serving requests using a server properly configured to serve static files, you're using a server configured as a one-size-fits-all solution that is likely loaded up with several mods for authorization, caching, ssl, and PHP. Due to the architecture of Apache, those mods incur a memory hit even when they aren't used (so each Apache thread has PHP capabilities, even for static resource requests).

The solution to this problem is simply to use several instances of the same server (Apache, for example) configured to handle different file types. I'm not as much of an Apache guy as I am a Lighttpd guy, though, so I can't tell you how to do it (though I know Dreamhost does). Alternately, you can use Apache to continue serving dynamic requests as it currently does, and then use a lightweight server like Lighttpd configured to serve static files.

Photoshop is overkill for resizing JPG images. Apache configured for dynamic content is overkill for serving 3KB CSS files.

----

Just for kicks: assuming we implemented some sort of file packaging architecture within HTTP, what happens with caching when a single file out of a package of 100 changes? While you might reduce the overall number of HTTP requests, you could very well increase the amount of bandwidth being served unless the server could selectively serve files out of a package (essentially, your web server would need to do all the compiling of archives when the request comes through). I think a better solution would simply be to allow for multipart HTTP requests and responses, similar to the way emails are handled.
  • joebert
  • Fart Bubbles
  • Genius
  • User avatar
  • Posts: 13502
  • Loc: Florida

Post 3+ Months Ago

Quote:
First, you're saying you're interested in reducing connections, not requests.

Assuming that a server is using keep-alive and the browser supports it, hundreds of files can be served within a single connection through separate requests.


Bad choice of words on my part. I intended "connections" to cover both the connections and the requests being made.

Any way you say it, the situation is still analogous to making a dozen trips to the grocery store to pick up a single carton of eggs.
Whether Keep-Alive is used or not is like whether you keep driving the same car or get in a new car each trip.

Quote:
Handling of an HTTP request for a static resource should be inexpensive, provided that the connection has adequate bandwidth and the file isn't large. A small server can determine the file requested, determine the best method for serving it, and serve it, all within a few milliseconds and without using much memory. Even for thousands of concurrent requests on thousands of unique static files, a suitably equipped system can cache the files in memory and incur no performance penalty on reading files from the disk.


How many bytes do you reckon are used in an HTTP request/response for headers?
Did you know Google uses a CSS sprite for their result page logo and buttons/etc.?
Did you know that recently Slashdot, a site with a term named after it for taking sites down with traffic, started using CSS sprites to reduce their load?

Quote:
Rather than serving requests using a server properly configured to serve static files, you're using a server configured as a one-size-fits-all solution that is likely loaded up with several mods for authorization, caching, ssl, and PHP.


There's a lot of servers out there doing exactly that from what I've gathered while reading around.

Quote:
Just for kicks: Assuming we implemented some sort of file packaging architecture within HTTP, what happens with caching when a single file out of a package of 100 changes?


The same thing that happens when the CSS Sprite Google uses changes. The whole file is replaced.
If changes are often enough to increase resource usage, it's probably a good idea to rethink which files are in which packages. :)
  • effim
  • Beginner
  • Beginner
  • User avatar
  • Posts: 35
  • Loc: Austin, TX

Post 3+ Months Ago

joebert wrote:
Any way you say it, the situation is still analogous to making a dozen trips to the grocery store to pick up a single carton of eggs. Whether Keep-Alive is used or not is like whether you keep driving the same car or get in a new car each trip.


Not to nitpick, but we're talking about something inexpensive here. Driving to the store repeatedly consumes a vast amount of resources in comparison to the result. The HTTP headers consume some bandwidth, yes, but typically it's insignificant compared to the content being transferred. We still can't manage to get people to remove extra white space that accounts for several kilobytes from their HTML, CSS, and JavaScript files that go into a production environment, not to mention removing fundamentally useless server identifier tags that tend to suck up several hundred bytes.

I hardly think that we should implement a new HTTP protocol for Google and Slashdot, personally. Google, for one, could simply utilize their Google Gears (Slashdot could too, for that matter) to store static files on the user's machine and update them only when they need to be updated.

joebert wrote:
There's a lot of servers out there doing exactly that from what I've gathered while reading around.


I agree, and I think it's ridiculous, especially when it matters (like on Ozzu). Again, though, I don't think the solution is to create additional methods to solve it when a much more semantic one exists. To recycle your metaphor, these sites are using a large pickup truck to go get eggs instead of taking a scooter or a bicycle.

You didn't mention multi-part requests or responses. What are your thoughts on a system like that?
  • Bozebo
  • Expert
  • Expert
  • User avatar
  • Posts: 709
  • Loc: 404

Post 3+ Months Ago

effim poses an interesting argument. Though, the proposed technique would require changes on the client side - and the package of resources to be gathered in one HTTP request is just a normal archive file.
eg:
index.html
files.zip

files.zip contains the images and stylesheets (provided they are not dynamically produced).
And perhaps dynamic files could be kept external to the archive - so the server isn't rebuilding it for every request.
  • effim
  • Beginner
  • Beginner
  • User avatar
  • Posts: 35
  • Loc: Austin, TX

Post 3+ Months Ago

I think the current HTTP specs might actually support a multipart response like I was mentioning...I'm digging in to see what I can find...

http://www.w3.org/Protocols/rfc2616/rfc ... l#sec3.7.2

and

http://www.motobit.com/tips/detpg_multi ... e-request/

Update

Apparently this is solidly supported, using a multipart/related mimetype in the HTTP response. Unfortunately, the RFC calls for the client to explicitly 'Allow' a multipart response in the headers. I want to give it a try a little later and see what I can come up with.
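
For what it's worth, a rough sketch of what such a multipart/related response might look like (the boundary and file names are made up for illustration):

Code: [ Select ]
HTTP/1.1 200 OK
Content-Type: multipart/related; boundary="resource-boundary"

--resource-boundary
Content-Type: text/html
Content-Location: http://example.com/index.html

<html> ... </html>
--resource-boundary
Content-Type: image/png
Content-Location: http://example.com/img/logo.png

... binary image data ...
--resource-boundary--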
  • Bigwebmaster
  • Site Admin
  • Site Admin
  • User avatar
  • Posts: 9089
  • Loc: Seattle, WA & Phoenix, AZ

Post 3+ Months Ago

Currently the Ozzu server is doing fine, but I was just pointing out where most of the load is coming from. I agree with your point about using different configurations for static vs. dynamic content, etc. Eventually, down the road, if Ozzu gets big enough, there would likely be a server for static content, a server for dynamic content such as scripts, and a SQL server for all the database stuff. For now, though, all of that is not needed, since I try to optimize everything I can and the server is still able to handle the load. I would classify it as one-size-fits-all at the moment.

effim wrote:
We still can't manage to get people to remove extra white space that accounts for several kilobytes from their HTML, CSS, and JavaScript files that go into a production environment, not to mention removing fundamentally useless server identifier tags that tend to suck up several hundred bytes.


I think most people do not do it because they simply don't know any better. I use this:

http://developer.yahoo.com/yui/compressor/

to compress all of the CSS and JavaScript on the site. If I recall correctly, that is saving about 40KB per user who visits the site. If your site doesn't get much traffic, you probably don't need to worry about nitty-gritty details like that, but once you get enough visitors, every little thing adds up.

30000 visitors x 40KB = 1.2 GB per day saved