szsori wrote:Donations before that time were used primarily for expenses as well, but there were some left over and combined with more recent donations we're still under $400.
I would just like to point out that the $400 was collected over a period of more than a year.
c4ho wrote:I am confused. I know hosting used to cost money a while back, but with the current hardware donations, what currently runs to FAR more than $100 a month?
Which bit do you find confusing? The hardware/bandwidth/electricity/support doesn't cost TheTVDB anything, but I assure you someone, somewhere, is still paying for it, and that's the core of what we are discussing - we are trying to find a way for these costs to be paid for by the site itself, meaning that TheTVDB LLC manages its own hosting rather than relying on third parties for free.
c4ho wrote:Based on some quick calculations you could get THREE of these for that kind of money:
Like I pointed out earlier in this thread, that LeaseWeb server is not an option due to its below-par processor and memory. They do have other alternatives that would suit us better, but since we already own hardware, we don't need to rent or lease a virtual or dedicated server; we need rackspace in a datacenter with good bandwidth and peering agreements. Sure, LeaseWeb provides this too, but we are talking a lot more money than the 29 euros you considered.
click170 wrote:I can't help but think, and please don't take this as an attack against the developers [praise be with you], but how efficient is the codebase if it requires that kind of horsepower behind it? What is it that is taking up so much memory and/or CPU cycles? You mention most of the load is from API calls; is there any caching in place for the most recent or most frequently pulled episodes? Is it the construction and compression of the XML files?
You have a point, and you are right on track; the current codebase is bringing the server to a standstill. This will obviously be improved during the redesign, but since we are looking to go back to a fully dynamic system (rather than the currently employed static-file one), any improvements we make to the code will be offset by the fact that we are no longer serving cached XML files.
To expand on this a little: the XML (or ZIP) files we currently serve are generated as and when they are needed and cached on disk indefinitely. The only time you (an API user) hit the database is during initial series lookups and smaller ratings/favorites queries. This works fairly well, but it means we (the third-party developers) can no longer ask the site to hand us all the data on series A, B, C and D as well as episodes 1-20, 48-56 and actors X, Y and Z - that is to say, we can, but it has to be done "manually" in several calls. With a fully dynamic system, all of the above could be queried with a single call (or possibly three) to the site, and at the same time it would allow for true personalization of the returned dataset.
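For the curious, here is a minimal sketch of what that on-demand disk caching looks like in principle. This is illustrative Python, not our actual code; the cache path and function names are made up:

```python
# A minimal sketch of the on-demand caching scheme described above.
# All names (CACHE_DIR, render_series_xml) are illustrative; the real
# TheTVDB code is not public.
import os

CACHE_DIR = "/var/cache/api"  # assumed cache location


def render_series_xml(series_id: int) -> bytes:
    # Stand-in for the real generator, which queries MySQL and builds XML.
    return f"<Series><id>{series_id}</id></Series>".encode()


def get_series_xml(series_id: int) -> bytes:
    """Return the cached XML for a series, generating it on first request."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{series_id}.xml")
    if os.path.exists(path):            # cache hit: no database work at all
        with open(path, "rb") as f:
            return f.read()
    xml = render_series_xml(series_id)  # cache miss: hit the database once
    with open(path, "wb") as f:         # cached indefinitely; only rebuilt
        f.write(xml)                    # when the series record is edited
    return xml
```

The point is that once a file exists on disk, the webserver can hand it out without touching the database at all - which is exactly what makes the static-file approach cheap, and what we give up by going fully dynamic.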
What's actually eating CPU at the moment is a combination of Apache, MySQL and the various scripts that create the cached XML/ZIP files. In addition there is the image mirror script which, when it wakes up, is fairly processor-intensive.
What you have to remember though is that we serve millions of pages per day, so the fact that we need fairly hefty hardware shouldn't come as a shock to anyone.
click170 wrote:Could I get you to elaborate on your hesitance towards clustering? I understand from a programming perspective, that does seem like a rather daunting task, but, planning for the future growth of the site, shouldn't it be assumed that we will eventually need to move to a clustering system? And perhaps, one day, ideally, entirely decentralized?
We kind of are planning for it, and it will [in a way] be a completely decentralized system. Basically, once the restructure is done and dusted we should be able to fire up another server in no time and pull the plug on the original without anyone (apart from the actual admins) knowing anything about it. With a little more magic we should be able to provide geographical mirrors as well, which would ensure high-speed transfers all around the world.
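To illustrate the client side of that: a third-party app could pick the nearest mirror itself. The sketch below is purely hypothetical Python (the mirror URLs and /ping path don't exist); it just times a small request against each mirror and uses the fastest:

```python
# A hedged sketch of client-side mirror selection. The mirror list and the
# /ping path are hypothetical, not a real TheTVDB endpoint.
import time
import urllib.request

MIRRORS = [
    "http://us.example-mirror.com",
    "http://eu.example-mirror.com",
]


def fastest_mirror(mirrors: list[str]) -> str:
    """Time a small request against each mirror and return the quickest."""
    best, best_time = mirrors[0], float("inf")
    for base in mirrors:
        try:
            start = time.monotonic()
            urllib.request.urlopen(base + "/ping", timeout=5).read()
            elapsed = time.monotonic() - start
            if elapsed < best_time:
                best, best_time = base, elapsed
        except OSError:
            continue  # mirror unreachable: skip it, the others still serve
    return best
```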
click170 wrote:Also, I think buddy has a point about the wiki-style donations bar ... a really good idea would be asking the developers to be aware of this bar and to try to implement it similarly in their applications somehow. If people had a visual indication, "omg, TheTVDB is going to go down if I don't donate!", I think their reaction might be different than it is to the current simple donation link.
The problem is that the majority of traffic comes in through the backdoor (i.e. the API). Yes, we could ask the third-party developers to add a visual indication to their applications, but would it really have much of an effect? When I sit down in front of the HTPC I sit down to watch TV (or a movie, or browse some photos, etc.) and not to worry about where my TV listings come from or whether they need some money to keep delivering those listings to me. Even if I got a huge on-screen popup saying "Donate now or the site will go down in a week" I would probably close it quickly and continue watching the last episode of Lost. I reckon most people [actually using HTPCs with a TheTVDB-compatible frontend] are the same.
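That said, for the sake of argument: if we ever exposed a funding-status feed, a frontend donation bar could be as simple as the Python sketch below. The /donation_status.xml endpoint and its <raised>/<goal> fields are hypothetical; nothing like it exists today.

```python
# Purely illustrative: what a frontend "donation bar" might look like if the
# site exposed a funding-status feed. The endpoint and its fields are
# hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

STATUS_URL = "http://thetvdb.com/donation_status.xml"  # hypothetical


def donation_bar(width: int = 40) -> str:
    """Render a text progress bar from a hypothetical <raised>/<goal> feed."""
    root = ET.parse(urllib.request.urlopen(STATUS_URL)).getroot()
    raised = float(root.findtext("raised", "0"))
    goal = float(root.findtext("goal", "1"))
    filled = int(width * min(raised / goal, 1.0))
    return f"[{'#' * filled}{'-' * (width - filled)}] ${raised:.0f}/${goal:.0f}"
```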
If we were able to donate from the HTPC frontend (rather than having to use an actual PC) it would likely have more of an effect, but unless all the third-party developers spend time building their own payment systems or PayPal gateways [for us, for free] it's not going to happen.
I find it interesting that there is such a small number of people posting here. Is it safe to assume from that fact alone that most people do not have a problem with what we are proposing (i.e. certain premium services available for a small fee)?
In any case, history has shown that we cannot rely on donations alone. Google ads help a little, but for them to be really effective we need to drive more people to the site (something we are looking into as well). In order to do that we need to provide more services/features for those people to use, and more traffic == more processors/memory needed == more bandwidth consumed == more overall cost.
I really don't want to compare this [relatively small] site with larger ones, but there are plenty of [large community] sites out there trying to survive on advertising alone, and if they cannot do it ... well, you do the math.
c4ho wrote:That's a ridiculous price to pay for bandwidth. I have one server using in excess of 5TB a month and it costs me 60 euros. I feel we are now getting to the crux of it. Bandwidth is VERY cheap, but the current codebase's backend requirements are so specific we can't make best use of the good deals out there.
Can I just ask how many hits per second that server is getting? And please don't tell me it's your seedbox, because that needs next to no CPU and/or memory (heck, I've run BitTorrent clients on my Fonera wifi AP successfully).
Yes, bandwidth is cheap (I can soon get a 100Mbps pipe installed in my office for ~$40/month, and 1Gbps pipes are apparently on the way for roughly the same price) - but running a dynamic website receiving millions (yes, millions, as in 7 figures) of hits per day is demanding on the hardware, and that has nothing to do with the codebase being "specific". And it's not just the initial hardware; it's also the ongoing replacement of that hardware. If a drive fails, it needs to be replaced; if a fan fails, that needs to be replaced; if lightning strikes and kills the entire server, the entire server needs to be replaced. What about physical access to the server? iLO access?
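To put some numbers on that, here is a quick back-of-envelope calculation. The 5 million/day figure is an assumption for illustration; the post above only says "millions (7 figures)":

```python
# Back-of-envelope load math for the traffic figures mentioned above.
hits_per_day = 5_000_000                # assumed; "7 figures" per the post
avg_rps = hits_per_day / 86_400         # ~58 requests/second on average
peak_rps = avg_rps * 4                  # traffic is bursty; 3-5x peaks are common
print(f"average: {avg_rps:.0f} req/s, assumed peak: {peak_rps:.0f} req/s")
```

Even spread perfectly evenly, that is a constant stream of dynamic requests hitting Apache and MySQL around the clock, which a cheap seedbox-grade machine simply isn't built for.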