[TriEmbed] Personal supercomputing
Nathan Yinger
npyinger at ncsu.edu
Fri Feb 6 16:01:57 CST 2015
Bitcoin miners have collected a fair number of performance comparisons for
SHA-256 hashing (
https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison#CPUs.2FAPUs).
Just from skimming, there doesn't seem to be a big difference in energy
efficiency (hashes per joule), but there is a big difference in capacity
(hashes per second).
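A quick way to read that table is to normalize each entry both ways:
hashes per second for capacity and hashes per joule for efficiency. Here's
a minimal Python sketch; the commented-out calls are placeholders to fill
in from the wiki table, not numbers I'm vouching for:

    # Normalize mining-hardware entries to capacity (MH/s) and
    # efficiency (MH/J). Plug in hash rates and wattages from the
    # comparison table; the examples below are placeholders only.
    def summarize(name, mhash_per_s, watts):
        print("%-16s %8.1f MH/s  %6.2f MH/J"
              % (name, mhash_per_s, mhash_per_s / watts))

    # summarize("some CPU", mhash_from_table, watts_from_table)
    # summarize("some GPU", mhash_from_table, watts_from_table)
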
What sort of time frame are you looking at? Depending on when you get
'sufficient experience', the costs could be very different. The ease of
development for FPGAs could also change by then. I'm not very knowledgeable
there, but I saw Adafruit selling an FPGA development board, so it looks
like at least some of the barriers are coming down.
Incidentally, I experimented with bitcoin mining during the winter, but it
seemed to increase my energy bill over the previous year. I don't know
whether that was the mining or just the weather, though.
~Nathan
On Thu, Feb 5, 2015 at 1:13 PM, Charles West <crwest at ncsu.edu> wrote:
> Hello,
>
> I'm looking into machine learning, and it seems like some of the methods
> could potentially just keep getting better the more data and computational
> power you throw at them. I'm not really skilled enough to do too much with
> them yet, but I wanted to go ahead and see what sort of setup would be
> good to build once I have sufficient experience in the area.
>
> Wikipedia keeps a list of the best price-per-performance (cost per GFLOPS)
> computers in the world (http://en.wikipedia.org/wiki/Flops). It is
> interesting to note that the last two entries are built from commodity PC
> parts (the latest coming in at $902.57 and delivering 11.5 TFLOPS). It
> would seem that one way to go would be to just build one of these servers
> with a top-of-the-line GPU.
>
> The complicating factor is that the GPU is really power hungry and takes
> something like 0.5 kilowatts or more to keep running. At normal utility
> rates, that means the electricity ends up costing more than the system
> itself within a year or two of continuous operation.
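>
> As a rough back-of-the-envelope check (the $0.11/kWh rate below is an
> assumed residential rate, not a quote from a bill):
>
>     # Break-even sketch: how long until the electricity spend matches
>     # the hardware price. Rate and draw are assumptions.
>     power_kw = 0.5            # continuous draw of the GPU build
>     rate_per_kwh = 0.11       # assumed residential $/kWh
>     hardware_cost = 902.57    # build cost from the Wikipedia list
>     cost_per_year = power_kw * 24 * 365 * rate_per_kwh   # ~$482/yr
>     breakeven_years = hardware_cost / cost_per_year      # ~1.9 years
>     print(cost_per_year, breakeven_years)
>
> At that assumed rate the crossover is closer to two years; at $0.20/kWh it
> is about one year.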
>
> The other possible alternative is building a large cluster of Odroid C1s
> or Raspberry Pi 2s, each of which has a quad-core ARM processor and only
> draws ~2.5 watts (so about 200 units would match the GPU box's power
> draw). On the other hand, you could probably only afford about 18 units
> for the same money, not counting energy.
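>
> To make the two equivalences concrete (the ~$50 per node figure is implied
> by the "about 18 units" estimate once you add a power supply, SD card, and
> networking; it is an assumption, not a quoted price):
>
>     # Cluster-vs-GPU-box equivalences from the paragraph above.
>     gpu_box_watts = 500.0       # roughly the GPU build's draw
>     gpu_box_cost = 902.57       # from the Wikipedia list
>     node_watts = 2.5            # Odroid C1 / Raspberry Pi 2
>     node_cost = 50.0            # assumed all-in cost per node
>     nodes_at_equal_power = gpu_box_watts / node_watts  # 200 nodes
>     nodes_at_equal_cost = gpu_box_cost / node_cost     # ~18 nodes
>     print(nodes_at_equal_power, nodes_at_equal_cost)
>
> Matching the GPU box's power budget allows roughly ten times as many nodes
> as matching its price does.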
>
> Lastly, you could just build a decked-out CPU server. I don't really know
> how those clock in in terms of power efficiency.
>
> A few questions:
>
> What would you build if you had something that would happily eat as many
> parallel FLOPS as you could deliver (with correspondingly increasing
> performance)?
>
> Does anyone know how GPUs compare to CPUs in terms of power consumption
> per FLOP?
>
> At what point does the power cost dominate the computer cost (timescale,
> hours of expected operation, etc)?
>
> Also, should this be our new standard way to heat the house during the
> winter?
>
> Thanks,
> Charlie West
>
>
>
> _______________________________________________
> Triangle, NC Embedded Computing mailing list
> TriEmbed at triembed.org
> http://mail.triembed.org/mailman/listinfo/triembed_triembed.org
> TriEmbed web site: http://TriEmbed.org
>
>