[TriEmbed] Personal supercomputing

kschilf at yahoo.com
Fri Feb 6 22:40:27 CST 2015


Hi Triembed,

View from the HW trenches...  :-)

If you feel the need for speed, the two main players are GPUs and FPGAs (ultimately a custom ASIC if you have the money or the volume).

I am awed by the NVIDIA card in my desktop with its bank of GPU cores.  It represents a tremendous amount of computation at a very low price, but I imagine that it is optimized for pixels or things that look like pixels.  I have no experience measuring its power consumption or programming it.
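For a flavor of what those cores are built for, here is a minimal data-parallel sketch, purely illustrative and not tied to any particular card: one GPU thread per array element, with the names, sizes, and constants all made up.

// saxpy.cu -- illustrative only: y[i] = a*x[i] + y[i] across N elements,
// one GPU thread per element.  Compile with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // ~1M elements (arbitrary)
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);     // one thread per element
    cudaDeviceSynchronize();

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The point is that the same few lines of kernel code are stamped out across a million elements at once, which is exactly the "looks like pixels" shape of problem.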

My bread and butter is FPGAs.  They offer the flexibility to implement numerous, tuned, parallel processing paths: the user is coding the processing structure itself, not just the instructions fed to a fixed processor.  They also bring amazingly versatile GPIO with wide-ranging level and timing support (not a benefit inside the application, but a real benefit when connecting the FPGA to other hardware).

FPGA power comes in two parts: a fairly constant static power to keep the SRAM configuration refreshed (the ante in the poker game) and a dynamic power for the circuits that are actually switching.  The static power can be intimidating if ultra-low power is your goal.  Once you get going, though, the computation/power quotient is very attractive, much, much better than a room full of CPU blades.  The key is ensuring that the problem maps well onto the FPGA.  There is a learning curve to programming them effectively, but I imagine that CUDA has a learning curve as well.
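To put rough numbers on that split, here is a toy calculation using the first-order switching-power relation P_dyn ~= alpha * C * V^2 * f.  Every constant in it is an assumption for illustration only; the vendor power estimators are where real figures come from.

// fpga_power_sketch.cu -- toy numbers only; use the vendor's power
// estimator for a real design.  Host-only code, builds with nvcc or g++.
#include <cstdio>

int main()
{
    // Static term: the "ante" just for having the SRAM-based fabric configured.
    double p_static_w = 2.0;      // assumed

    // Dynamic term: first-order CMOS switching power, lumped into one estimate.
    double alpha    = 0.15;       // assumed average activity factor
    double c_farads = 100e-9;     // assumed lumped switched capacitance
    double v_volts  = 1.0;        // assumed core voltage
    double f_hz     = 200e6;      // assumed clock rate
    double p_dynamic_w = alpha * c_farads * v_volts * v_volts * f_hz;

    printf("static %.2f W + dynamic %.2f W = %.2f W\n",
           p_static_w, p_dynamic_w, p_static_w + p_dynamic_w);
    return 0;
}

With those made-up numbers the static ante is a couple of watts and the switching logic adds a few more; compare that against the ~0.5 kW GPU figure mentioned further down the thread.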

A good way to start is to code the algorithm in a high-level language, profile the code, and identify the bottlenecks.
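As a minimal sketch of that workflow (the function and numbers here are hypothetical stand-ins): time the suspected hot spot on the CPU first, then decide whether it is worth moving to a GPU kernel or an FPGA pipeline.  In practice a profiler such as gprof or perf does the bookkeeping; this just shows the idea.

// profile_sketch.cu -- time a candidate hot loop before deciding what to offload.
#include <chrono>
#include <cstdio>
#include <vector>

// Stand-in for the real bottleneck a profiler would point at (hypothetical).
static double hot_kernel(const std::vector<double>& x)
{
    double acc = 0.0;
    for (size_t i = 0; i < x.size(); ++i)
        acc += x[i] * x[i];                  // placeholder work
    return acc;
}

int main()
{
    std::vector<double> data(1 << 22, 1.5);  // arbitrary test size

    auto t0 = std::chrono::steady_clock::now();
    double result = hot_kernel(data);
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("hot_kernel: %.3f ms (result %.1f)\n", ms, result);
    // If this loop dominates the run time and is data-parallel, it is a
    // candidate for a GPU kernel or an FPGA pipeline; if not, leave it on the CPU.
    return 0;
}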

A Chevette and a Lamborghini will get you to the grocery store, but the track is a different matter.  :-)

Sincerely,
Kevin Schilf
Digital Telesis, Inc.
919 349 7730


--------------------------------------------
On Fri, 2/6/15, The MacDougals <paulmacd at acm.org> wrote:

 Subject: Re: [TriEmbed] Personal supercomputing
 To: crwest at ncsu.edu
 Cc: "'TriEmbed'" <triembed at triembed.org>
 Date: Friday, February 6, 2015, 7:59 PM
 
 I do know that 9 of the top 10 Green supercomputers have GPUs attached (8 of those are Nvidia GPUs and one is AMD).  The one without GPUs is not a CPU-only system; it has PEZY-SC coprocessors.
 http://www.green500.org/lists/green201411
 
 The newest Nvidia GPUs (Maxwell architecture) have significantly lower power requirements than previous generations.  With high performance computing, you have to run your workloads to see if GPUs are the way to go.  If you are willing to play with the code, most applications can be tweaked to run much faster on GPUs than on CPUs.  The effort to program large parallel machines should not be underestimated.  But it is the way of the future.
 
 If you would like to try out GPU computing, Nvidia has a “free trial” offer at the moment.
 http://www.nvidia.com/object/gpu-test-drive.html?2
 
 ---> Paul
 
 From: TriEmbed [mailto:triembed-bounces at triembed.org] On Behalf Of Nathan Yinger
 Sent: Friday, February 06, 2015 5:02 PM
 Cc: TriEmbed
 Subject: Re: [TriEmbed] Personal supercomputing
 Bitcoin miners have collected a fair number of performance comparisons for SHA-256 hashes (https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison#CPUs.2FAPUs).  Just from skimming, there doesn't seem to be a big difference in energy efficiency, but there is a big difference in capacity.
 
 What sort of time frame are you looking at?  Depending on when you get 'sufficient experience', the costs could be very different.  Also, the ease of development for FPGAs could change.  I'm not knowledgeable, but I saw Adafruit selling an FPGA development board, so it looks like at least some barriers are coming down there.
 
 Incidentally, I experimented with doing bitcoin mining during winters, but it seemed to increase my energy bill over the previous year.  I don't know if that was the mining or the weather, though.
 
 ~Nathan
 
 On Thu, Feb 5, 2015 at 1:13 PM, Charles West <crwest at ncsu.edu> wrote:
 
 Hello,
 
 I'm looking into machine learning, and it seems like some of the methods could potentially just keep getting better the more data/computational power you throw at them.  I'm not really skilled enough to do too much with them yet, but I wanted to go ahead and see what sort of setup it might be good to build once I have sufficient experience in the area.
 
 Wikipedia has a list of the most power/price computers in the world (http://en.wikipedia.org/wiki/Flops).  It is interesting to note that the last two entries are made from commodity PC parts (the latest coming in at $902.57 and delivering 11.5 TFLOPS).  It would seem that one way to go would be to just build one of these servers with a top-of-the-line GPU.
 
 The complicating factor is that the GPU is really power hungry and takes something like > 0.5 kilowatts to keep running.  At normal utility rates, this means that power is going to cost more than the system if it is kept operating continuously for more than a year.
 
 The other possible alternatives are building large clusters of Odroid C1s or Raspberry Pi 2.0s, each of which has a quad-core ARM processor and only takes ~2.5 watts to run (equivalent power at 200 units).  At the same time, you could probably only have about 18 units at equivalent cost, not counting energy.
 
 Lastly, you could just build a decked-out CPU server.  I don't really know how they clock in in terms of power efficiency.
 
 A few questions:
 
 What would you build if you had something that would happily eat as many parallel flops as you could deliver (with correspondingly increasing performance)?
 
 Does anyone know how GPUs compare to CPUs in terms of power consumption per FLOP?
 
 At what point does the power cost dominate the computer cost (timescale, hours of expected operation, etc.)?
 
 Also, should this be our new standard way to heat the house during the winter?
 
 Thanks,
 Charlie West
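 
 A rough back-of-the-envelope on that break-even question, using the ~$900 system and ~0.5 kW figures above and an assumed $0.11/kWh utility rate (the rate is the number to adjust for your own bill):
 
 // breakeven_sketch.cu -- rough crossover of power cost vs. hardware cost.
 // Host-only code, builds with nvcc or g++.  All rates are assumptions.
 #include <cstdio>
 
 int main()
 {
     double system_cost = 902.57;   // commodity build quoted above ($)
     double draw_kw     = 0.5;      // continuous draw (~0.5 kW, as above)
     double rate        = 0.11;     // assumed utility rate ($/kWh)
 
     double cost_per_hour   = draw_kw * rate;
     double breakeven_hours = system_cost / cost_per_hour;
     printf("~$%.3f/hour -> power matches hardware cost after ~%.0f hours (~%.1f years)\n",
            cost_per_hour, breakeven_hours, breakeven_hours / (24.0 * 365.0));
     return 0;
 }
 
 At that rate the crossover lands a bit under two years of continuous operation; a cheaper build or a higher utility rate pulls it toward the one-year mark mentioned above.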
 _______________________________________________
 Triangle, NC Embedded Computing mailing list
 TriEmbed at triembed.org
 http://mail.triembed.org/mailman/listinfo/triembed_triembed.org
 TriEmbed web site: http://TriEmbed.org
  



