That said, I really doubt there are going to be many suggestions from the crowd that the admins haven't already thought of. My "average" infrastructure looks something like this: SSL terminated at the load balancer(s), edge servers packed full of memory running varnish in front of nginx for cacheable content, passthrough to nginx for uncacheable content backed by <insert favorite storage solution>, and a CDN that grabs up anything it can after an initial request is made. That seems to be industry standard right now.
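To make that split concrete, here's a rough sketch of the edge-routing decision: cacheable requests get served through the cache layer, everything else is passed straight through to the backend. This is my simplification in Python, not real varnish VCL (varnish's builtin logic is more involved), and the tier names are just labels.

```python
# Sketch of the cacheable-vs-passthrough decision at the edge.
# Simplified stand-in for varnish's default behavior, not actual VCL.

def is_cacheable(method: str, headers: dict) -> bool:
    """Decide whether a request can be served from the cache tier."""
    if method not in ("GET", "HEAD"):            # only safe methods are cacheable
        return False
    if "authorization" in {k.lower() for k in headers}:
        return False                             # personalized content: pass through
    if "no-store" in headers.get("Cache-Control", "").lower():
        return False
    return True

def route(method: str, headers: dict) -> str:
    """Return which tier handles the request."""
    return "varnish-cache" if is_cacheable(method, headers) else "nginx-passthrough"

print(route("GET", {}))                             # → varnish-cache
print(route("POST", {}))                            # → nginx-passthrough
print(route("GET", {"Authorization": "Bearer x"}))  # → nginx-passthrough
```

The point is just that the cache tier only ever sees requests it's allowed to answer; everything stateful skips it entirely.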
That's pretty close to what we're doing, except we don't have SSL or a CDN yet, and we only have a single nginx server right now.
It seems to me like they're doing everything right, aside from having a continuity-planning location. But I'd also venture that a second location is out of financial reach. They certainly don't PLASTER their site with advertisements. In fact, I'm at a loss for what their method of revenue generation is, unless they're just really, really charitable. (In which case, we should all grovel at their kindness.)
Grovel away. Right now all time spent programming is donated by me, all time spent on admin stuff is donated by Matt A and Matt S, all time spent managing mods and addressing everyday issues with the site is donated by Coco, and all bandwidth and hardware is provided free of charge by Matt A. Without each of those, the site wouldn't exist right now. Mods also donate their time and deserve thanks. We make well under $100/month from ads and donations combined (mostly ads), most of which has gone toward forming our LLC. The new site has some additional revenue streams planned and some additional ad locations, but my goal is still for them to be unobtrusive.
In my world, we build out an entire second location that mirrors our first. If we need to do any site maintenance, we kick all traffic over to the "cold" location, do our work, kick all traffic back to the original location, then do the same maintenance on the "cold" location. Insanely expensive? You bet. Any customer downtime or lost ad revenue? Nope. This design is actually meant for emergencies like site1 getting nuked from orbit; the maintenance thing is just a handy byproduct. It's all about keeping the users unaware that there ever was a problem and keeping the services and ad revenue rolling.
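The maintenance dance above can be sketched as a tiny state machine. The site names and this `Router` class are hypothetical; in practice the traffic switch happens at the DNS or load-balancer layer, not in application code.

```python
# Sketch of the two-site maintenance flow: drain one location,
# work on it, restore it, then repeat for the other location.
# Hypothetical illustration -- real switching lives in DNS/LB config.

class Router:
    def __init__(self, primary: str, standby: str):
        self.sites = {primary: "up", standby: "up"}
        self.active = primary

    def maintain(self, site: str):
        """Shift traffic away from `site`, then take it down for work."""
        others = [s for s, state in self.sites.items()
                  if s != site and state == "up"]
        if not others:
            raise RuntimeError("no healthy site to absorb traffic")
        self.active = others[0]          # users never see the outage
        self.sites[site] = "maintenance"

    def restore(self, site: str):
        self.sites[site] = "up"

router = Router("site1", "site2")
router.maintain("site1")   # all traffic now served by site2
print(router.active)       # → site2
router.restore("site1")
router.maintain("site2")   # same work on the other location
print(router.active)       # → site1
```

The guard in `maintain` is the emergency case too: if site1 really does get nuked from orbit, the same switch sends everything to site2.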
And hey, I know I'm a nobody on the forum here, but if anyone would like to reach out to me for any "well this is how we do it" sort of advice, please feel free.
I value input from people like you, who have an understanding of the proven methods for doing things. It validates our decisions, provides guidance for future ones, and in some cases points out where we've screwed up.
Unless we get some corporate sponsorship, CDN hosting will not be used, but I'm building out the new site's API to handle it in case we eventually go that route. We're also planning to let people host a reverse proxy if they have the bandwidth to handle it, and then load balance across them, even if it's extremely simple DNS round-robin load balancing. Part of that will be to allow (and, for new commercial projects, require) that large projects either host their own reverse proxy for their traffic or help out the project by having their reverse proxy sit in our group of load-balanced servers. The only other options would be going to a completely premium service, which I never want to do, or getting a large corporation to sponsor the site.