Friday, December 5, 2008

The Grid is Dead: Follow-Up

I posted earlier about Arnie Miles's presentation at the Sun HPC Consortium meeting in Austin before SC08. This is the promised follow-up, now that I have a copy of the slides.

Arnie's main point is that while the grid is dead, the need and associated issues remain. There is still a need to share resources across administrative domains, and that raises a host of challenges (registration, discovery, security, etc). The current effort to build a one-size-fits-all solution (ie Globus) has worked for a handful of large government sites, but otherwise has been largely a failure, thanks to "nearly insurmountable scalability and complexity issues."

Enter Thebes, a consortium led by Arnie and Georgetown aimed at addressing this situation; it will initially focus on authentication, authorization, and accounting. So far, they've published a first draft of a Resource Description Language (RDL), a single, harmonized language to describe jobs and resources. Slated for summer 2009 is a draft of a Resource Discovery Network (RDN), a robust, globally distributed peer-to-peer and hierarchical resource discovery network.
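The RDL draft itself isn't in the slides I have, but to give a flavor of the kind of information a job-and-resource description language has to carry, here is a purely hypothetical sketch in Python. The field names and structure are my own invention, not anything from the Thebes draft.

    # Purely hypothetical sketch of the kind of job/resource description a
    # language like RDL has to express. Field names are invented for
    # illustration and are NOT taken from the Thebes RDL draft.
    import json

    job_request = {
        "job": {
            "name": "sample-run-42",
            "executable": "/apps/model/bin/model",
            "arguments": ["--input", "case01.dat"],
            "wallclock_limit_hours": 12,
        },
        "resources": {
            "cores": 256,
            "memory_per_core_gb": 2,
            "interconnect": "infiniband",
            "architecture": "x86_64",
        },
        "credentials": {
            "identity": "user@home-institution.edu",
            "allocation": "project-xyz",
        },
    }

    print(json.dumps(job_request, indent=2))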

The consortium is actively seeking help, primarily in the form of developers, to continue their work.

Wednesday, December 3, 2008

20 Years of SC


This being the twentieth anniversary of SC, they had a large exhibit in one corner of the convention center with some pretty interesting items. Each year of the conference had a display, including technology from the time as well as items from the conference.

A main attraction at the exhibit was a Cray-1 from the mid 70's, originally installed at Los Alamos. This system cost ~$8M and provided 80 MFLOPS of processing power. The MacBook Pro in my backpack delivers more than 200 times the processing power (16 GFLOPS), and that is only 1/100,000th the power of the current leading supercomputing cluster (RoadRunner).
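For the curious, the back-of-the-envelope math (using a rough peak figure of 1.5 Pflops for RoadRunner) works out like this:

    # Back-of-the-envelope comparison; all figures are rough peak numbers.
    cray_1 = 80e6          # Cray-1, ~80 MFLOPS
    macbook_pro = 16e9     # ~16 GFLOPS
    roadrunner = 1.5e15    # ~1.5 PFLOPS peak (Nov 2008 #1 system)

    print(f"MacBook Pro vs Cray-1:     {macbook_pro / cray_1:,.0f}x")      # 200x
    print(f"RoadRunner vs MacBook Pro: {roadrunner / macbook_pro:,.0f}x")  # ~94,000x, call it 1/100,000th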

There were also components from later Crays (the X1, from about five years ago), as well as a BlueGene/L board from the San Diego Supercomputer Center.

They had some posters with stats on the conference. It's interesting to note that attendance was relatively flat during the ten years from 1991 to 2001, but has roughly doubled since then. Another chart documented an increase of five orders of magnitude in both processing power and number of CPUs over the twenty years.

My SC08 Gig


I showed up a little early for my SC08 gig on Wednesday afternoon, as Woody Woodruff from IBM, who was playing before me, was listed as playing country music. And indeed he was - a selection of country and old bluegrass material. He kindly accepted my offer to sit in on a few tunes I knew, helping out with harmonies and some guitar solos, which was fun.

When my set rolled around, Woody hung on and played for a while with me, helping out with rhythm guitar and harmonies. I then finished up the set with some solo material. Glen was good enough to snap the picture at the top, thanks Glen!

I was followed by a wonderful group playing classical music. The music room made for a tremendously civil and relaxing addition to SC, and I went and talked to Vivian Benton, the wonderful organizer of the room, about making it a permanent addition. She said she was already ahead of me, and had requested that there be another music room at SC09, which will be in Portland.

Thursday, November 20, 2008

Visualization Gets Touchy-Feely at SC08

SC has always had some of the coolest visualization displays on the planet, and this year continues that trend, with a big jump in interactive multi-touch systems.

This is the Lambda Table from the Electronic Visualization Laboratory (evl), "an interdisciplinary graduate research laboratory that combines art and computer science, specializing in advanced visualization and networking technologies."
One of the applications was created for the Minnesota Museum of Science to teach students how rain and water flow over terrain (apparently there is a widespread misconception that water flows south, instead of down). In this case, when you touched the screen, it rained on that portion of the landscape, and the water then flowed down the terrain. Another application on the device was a view of the rat brain cortex, which could be zoomed in and out to a fairly high level of magnification.
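As a toy illustration of the "down, not south" point (this is my own sketch, not the EVL application), you can route a raindrop over a heightmap by always stepping to the lowest neighboring cell until it pools in a local minimum:

    # Toy illustration of "water flows down, not south": route a raindrop
    # downhill on a small heightmap by repeatedly stepping to the lowest
    # neighboring cell. My own sketch, not the EVL application.
    heightmap = [
        [9, 8, 7, 6],
        [8, 7, 5, 4],
        [7, 6, 4, 2],
        [6, 5, 3, 1],
    ]

    def flow_path(grid, row, col):
        """Return the cells a drop visits, starting from (row, col)."""
        path = [(row, col)]
        while True:
            neighbors = [
                (r, c)
                for r in range(max(0, row - 1), min(len(grid), row + 2))
                for c in range(max(0, col - 1), min(len(grid[0]), col + 2))
                if (r, c) != (row, col)
            ]
            lowest = min(neighbors, key=lambda rc: grid[rc[0]][rc[1]])
            if grid[lowest[0]][lowest[1]] >= grid[row][col]:
                break  # local minimum: the water pools here
            row, col = lowest
            path.append((row, col))
        return path

    print(flow_path(heightmap, 0, 0))  # "rain" on the top-left corner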

There were also some of the more traditional techniques, where you manipulate the visualization with a handheld controller and glasses; in this case the controller is wireless...

and in this case it is a 3D joystick with force feedback. The setup was built by the High-Performance Molecular Simulation Team with the Computational Systems Biology Research Group from RIKEN in Japan, and is looking at protein folding. It includes a display in the back showing a protein and a target drug molecule in 3D, and allows the user (wearing special 3D glasses) to move the drug molecule around and in and out of the larger protein. All the while the joystick is providing feedback on the interactions between the drug and the protein, giving resistance and bumps as the molecules collide.
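To give a general sense of how that kind of haptic feedback can be driven (an illustrative sketch only, not the RIKEN team's implementation), the force sent to the device can come from an interaction potential such as Lennard-Jones, which is gently attractive at moderate separations and pushes back hard as two atoms collide:

    # Illustrative sketch of distance-based force feedback (not the RIKEN
    # code): the Lennard-Jones force is mildly attractive at moderate
    # separations and steeply repulsive as atoms collide, which is the
    # "resistance and bumps" the joystick renders.
    def lennard_jones_force(r, epsilon=1.0, sigma=1.0):
        """Magnitude of the LJ force at separation r (positive = repulsive)."""
        return 24 * epsilon * (2 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

    for r in (0.9, 1.0, 1.12, 1.5, 2.0):
        print(f"r = {r:4.2f}  force = {lennard_jones_force(r):+9.3f}")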

Cyviz was showing a gorgeous large display with 3650 by 1050 resolution generated by DLP projectors and their special blending technologies to provide a seamless image.
Next is a beautiful 64" prototype LCD monitor from Sharp in the San Diego Supercomputer Center booth, with 4096 x 2160 resolution.

There were a couple of other multitouch tables, one from the University of Amsterdam and another from RENCI. The RENCI folks said it took them about six months to put their hardware and software together.

Wednesday, November 19, 2008

The SC08 Green500 BoF


The SC08 Green500 BoF was led by Wu Feng from Virginia Tech. He started with a brief history for perspective, then launched into the issues surrounding the creation of the list, the discussion of which was the purpose of the BoF. There were brief presentations from several vendors, from folks at the analyst firm IDC and the journal HPCWire, and from a couple of users.

I hadn't really intended to go to this session but tagged along with Glen (ironically, because I needed to find power for my laptop). I tried to listen to the conversation, which got pretty energetic at times, from the perspective of a CIO/decision maker, thinking about what they would want from the Green500. Here are my thoughts:
  • A single, simple number will be hard to deliver. There is simply too great a range of uses for HPC, both in terms of size and computational problem, for a single number to mean much of anything. CIOs want a single number, but they also understand the complexity of the situation, and might even be skeptical of a single number as overly simplified. Several in the audience suggested creating segments or classes - I think this is appropriate, probably based on the size of the system in Tflops (see the sketch after this list).
  • The number should not have anything to do with cost or "value". While the vendors may want that, the Green500 should not evaluate value - let IDC, HPCwire, and ultimately the buyers do that.
  • The major challenge is sorting out what is part of the data center facility and what is part of the system. I don't see an easy answer here. With the CIO hat on, I would say leave the facility out of it, as that's a separate issue with separate challenges and opportunities (ie tax incentives for green buildings, etc). Just tell me how much electricity the system takes, and how much heat it rejects. If you want to try to improve your score by taking on some of the facility issues, I guess I'm okay with that, as I expect you will be able to improve your score, but at a greater cost (which, again, I will evaluate on my own).
  • No CIO will make the decision to spend 5-8 figures on a Green500 ranking alone. They will ask their people: what can the system do, and what will it cost to acquire and operate? The main goal of the Green500 should be to help vendors and IT staff have a conversation about energy efficiency. It should also help provide credibility to the final purchasing recommendation that the CIO sees. The CIO should be ensuring that the system meets the compute needs of the organization in a cost-effective and environmentally responsible way.
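For concreteness, the Green500 ranks systems by performance per watt (MFLOPS/W). A minimal sketch of that metric, bucketed into the kind of size classes suggested above, might look like the following; the class boundaries and example systems are invented for illustration, not anything the Green500 has adopted.

    # Minimal sketch of a Green500-style efficiency number (MFLOPS per watt),
    # grouped into size classes. Class boundaries and example systems are
    # invented purely for illustration.
    def mflops_per_watt(linpack_tflops, power_kw):
        return (linpack_tflops * 1e6) / (power_kw * 1e3)

    def size_class(linpack_tflops):
        if linpack_tflops >= 100:
            return "large (>= 100 Tflops)"
        if linpack_tflops >= 10:
            return "medium (10-100 Tflops)"
        return "small (< 10 Tflops)"

    # (name, Linpack Tflops, power draw in kW) -- hypothetical systems
    systems = [("A", 250.0, 900.0), ("B", 42.0, 210.0), ("C", 6.5, 40.0)]
    for name, tflops, kw in systems:
        print(f"{name}: {mflops_per_watt(tflops, kw):6.1f} MFLOPS/W"
              f"  [{size_class(tflops)}]")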
It was a very interesting discussion to listen to, and I'll be interested to see where things stand a year from now.

SC08 Data Lifecycle Management BoF


The Data Lifecycle Management: ILM in an HPC World BoF at SC08 Tuesday afternoon was an interesting discussion of the issues surrounding the storage, archival, and management of large data sets. Sponsored by Avetec's Data Intensive Computing Environment (DICE), it was led by Tracey Wilson (Computer Sciences Corporation and DICE) and Ralph McEldowney (Air Force Research Laboratory Major Shared Resource Center).

Many in the audience, including folks from NASA and other large sites, had years of experience with this - they were past the initial challenges of simply getting the capacity in place, and were now struggling to figure out what data needs to be kept for how long, who owns it, how to find it again, and how to keep growing the service at something resembling an affordable cost. At the heart of the problem is the need for users to tag their stored data with metadata that describes such things as the owner, contents, projected size, expected lifespan, etc. Major HPC sites are finding that storage needs are eating into funding originally aimed at HPC capacity, thus potentially limiting the compute resources available for research.
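To make that concrete, the kind of metadata record being discussed looks roughly like the following sketch; the field names are my own, not any particular site's schema.

    # Sketch of the kind of metadata record discussed in the session.
    # Field names are illustrative, not any particular site's schema.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DatasetMetadata:
        owner: str                 # who is responsible for the data
        project: str               # funding project or allocation
        description: str           # what the data actually contains
        projected_size_tb: float   # expected growth, for capacity planning
        expires: date              # when it can be archived or deleted
        legal_hold: bool = False   # e.g. subject to eDiscovery

    record = DatasetMetadata(
        owner="jdoe",
        project="climate-2008",
        description="monthly model output, 4 km grid",
        projected_size_tb=12.0,
        expires=date(2011, 12, 31),
    )
    print(record)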

They listed some of the more commonly used solutions in this general space today, although each has a different level of functionality when it comes to handling the metadata associated with DLM.

The group seemed to agree that user training was a major goal - users need to understand the critical importance of associating metadata with their storage. Some expressed frustration in getting agreement on this, however - there was discussion of how to provide incentives to drive the right behavior, as well as the suggestion of deleting, after a period of time, any data that does not have associated metadata.
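The "no metadata, eventually no data" suggestion amounts to a periodic sweep along these lines; the 90-day grace period and the in-memory catalog are stand-ins I've made up for illustration.

    # Minimal sketch of the "delete untagged data after a period of time"
    # suggestion. The 90-day grace period and the in-memory catalog are
    # made-up stand-ins for whatever a real site would use.
    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=90)
    metadata_catalog = {"/data/climate-2008"}     # paths that have metadata

    # (path, date the data landed on the system)
    datasets = [
        ("/data/climate-2008", date(2008, 1, 15)),
        ("/data/scratch-dump", date(2008, 6, 1)),
    ]

    today = date(2008, 11, 18)
    for path, created in datasets:
        if path not in metadata_catalog and today - created > GRACE_PERIOD:
            print(f"flag for deletion (no metadata, past grace period): {path}")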

One site provides disincentives through increased fees for inefficient use of storage - space that is requested but not utilized is charged at a higher rate, as is data that is retained beyond a certain timeframe.

There was general consensus that establishing and enforcing policy around data management is a must - the example of eDiscovery was brought up as justification for properly classifying data so that it can be located easily during the discovery phase of a civil or criminal legal case. Failure to be prepared for eDiscovery can cost as much as $15M for a single case. One person noted that policy is relatively useless if developed by IT alone - it should be driven by the business needs of the organization, ie the records management and legal needs, working in partnership with IT. Another person commented that there is a high likelihood of a failure of the commons in the effort to develop policy if it is not driven from the top down. Most researchers will be certain that there is no eDiscovery risk to themselves, so they have no personal motivation to participate.

SC08 Music Room


I swung by the SC08 Music Room yesterday, as I'm scheduled to play there today (Wed) from 4:00-4:45. I was fortunate to catch a pro, Darcie Deaville, who was there with sponsorship from enParallel. It's a great room, very nicely done, and Darcie sounded great playing a variety of tunes on guitar and fiddle. She had been planning to bring a hurdy-gurdy, but it didn't make it. She did, however, have two fiddles in tow, one of which she keeps in an open tuning (F?) and plays with drone strings to give a similar feel to a hurdy-gurdy.

From the SC08 Showfloor


I started my usual walk through the showfloor yesterday after the keynote to get the lay of the land and try to ID interesting talks at some of the lab booths. The FermiLab booth always has cool stuff; last year it was a cloud chamber where you could see the trails of particles. This year they have an interesting sculpture modeled on a dark matter detector. The actual unit is underground, but it is approximately the same size as the sculpture. So far they haven't detected much and are working to make the detection wafers larger.

Update 11/20/08 9:05: I should have noted they are looking for weakly interacting massive particles (WIMPs), and there is an interesting story over at Wired from yesterday reporting that another group of researchers on the same hunt seems to have found something.


nVidia is getting a lot of attention for their "personal" supercomputer. The numbers look great, but one has to keep in mind that, as Glen points out, these are not general purpose HPC systems.

Rich Brueckner over at Sun (which seemed to be one of the busier booths) was in chaps and ready to ride, but with good reason - he had brought the Java Chopper to the show:

It's interesting to note in Rich's blog that he is looking to add a SunSpot to the bike...

These are very interesting little devices coming out of Sun Labs, the research arm of Sun. They have a variety of sensors and I/O options (accelerometers, light detectors, temperature sensors, LEDs, push buttons and general I/O pins), they run Java, they automatically form wireless mesh networks to communicate, and they can run for months on a single battery charge. A Java chopper with a Java-embedded SunSpot cranks the geek meter up pretty high...

which then quickly thwacks back to the zero pin across the aisle at the Microsoft booth. They were working a golf theme, obviously, and had a video golf course setup:

My guess is that this attracted mostly sales people from other vendors, as opposed to HPC customers, who don't as a group strike me as major golfers. I don't think Microsoft marketing has quite the right line on the SC crowd... my condolences to the MS booth staffers who have to toe the line; to their credit, they were doing the best they could with what they had to work with.

Tuesday, November 18, 2008

Live Blog of the SC08 Keynote - Michael Dell


Here we go with the Welcome and Keynote for SC08. Michael Dell is the keynote speaker this year.

Patricia Teller from UTexas is the General Chair of the conference, and she welcomed the audience with the news that the conference is setting a number of records, including the number of exhibits, the number of education attendees, and total attendance of more than 10,000.

This is the 20th anniversary of SC, and a video retrospective included a clip of the Seymour Cray keynote, where he joked that his 10" slide rule was the leading edge of computing power.

9:07 - Nine organizations have been involved in sponsoring all 21 SC conferences: Cray, HPCWire, IBM, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NASA, NEC, Numerical Algorithms Group, and Sun Microsystems.

9:10 - Michael Dell is introduced.

Michael says it takes 20 Pflops to model the human brain, which itself runs on 20 watts(!). The Japanese are spending $1.8B on a 10 Pflop system.

9:18 - Dell is announcing today they are extending their partnership with nVidia to add Tesla cards to their precision workstations, which will deliver one Tflop to the desktop.

9:19 - 489 of the Top500 supercomputers are based on the x86 architecture.

9:21 - Video of the vis wall at TACC, which is powered by Dell gaming systems. They are using it to support a cancer researcher who noted that we have reached a point where we can generate 250 TB of data for a single cell. The picture is part of a rotating 3D model from the cancer research.

9:26 - Technology advancements over the last five years - you get more Tflops with 90% fewer systems. We need to build petascale software to take advantage of petascale hardware.

9:30 - 70% of HPC budgets go to staffing and facilities.

9:31 - Facebook utilizes 10,000 Dell servers.

9:36 - Brief video - review of the various generations, baby boomers, X, Y... the next generation is the regeneration.

9:37 - Q & A - Michael clarifies that his reference to the brain at the beginning was not to suggest that we need to recreate the brain, but simply to compare current HPC to the brain. "There are enormous opportunities to improve the man-machine interface."

9:43 - Question about his vision for education. He responds that public education needs to be delivering students familiar and at ease with 21st century technology - "it's as important as reading, writing, and arithmetic."

9:46 - Question about alternative energy. A normal desktop uses $120 in energy a year; a new Dell system uses $6 a year. This isn't a source of new energy, but it does dramatically reduce the demand. Also, Dell is one of the first companies to become carbon neutral - you'll see companies begin to compete in this area.

9:50 - Another energy question - how to push component vendors to improve? Dell's suppliers report back on their carbon emissions; it's one of the things Dell evaluates them on.

9:51 - Keynote is done.

Monday, November 17, 2008

Best Conference Gear Award

My neighbor in the SAM-QFS BoF session gets the award for best conference gear. Finding power in these hotel conference rooms is a constant battle - this little unit has three plugs and two USB jacks. Outstanding!

Glen is Blogging!


I guess Glen (that's his left hand) got sick of my nagging (or was it encouragement?) and is also blogging over at Exploding Frog. He has a slightly different perspective on events, as he actually knows how to use HPC instead of just spewing off acronyms and jargon, so make sure you add him to your RSS feed as well.

SAM-QFS BoF Session: Great for Backup Too


Harriet Coverston, Sun Distinguished Engineer and architect of Sun's distributed SAN file system, led a Birds of a Feather session with users, Sun staff, and third-party vendors on Sun Storage Archive Manager and QFS. It was interesting to hear from current users at Clemson, the Arctic Region Supercomputing Center, Mississippi State, and TACC, who are all putting these solutions to substantial use.

Clemson is using SAM-QFS not only for home directories and for moving large data to their cluster (which runs Lustre internally), but also as a substantial addition to their backup environment. The SAM-QFS approach of creating copies of files as they come into the system positions it to be much more scalable for very large storage. It also makes for much faster recoveries - in one test they were able to restore 9 million files in nine hours, while a test restore with their traditional backup system was still running at 24 hours. This is reminiscent of conversations we've had with the Friedrich Miescher Institute for Biomedical Research, who are also using SAM-QFS as an add-on to traditional backup services.

The fact that numerous users we have talked to have come to rely on SAM-QFS to solve backup challenges in their environments speaks well of the software's data protection approach.

Jim Pepin: Arpanet < iPhone


Jim Pepin, CTO at Clemson, had some thought-provoking points about the challenges facing research environments. Similar to some of Ray Kurzweil's analysis of the rates of change for technology, Jim compared technology today to his start in the field in the 1970s, and noted that there have been five to seven orders of magnitude of growth in storage, compute, and networking capacity over that time. He noted that his iPhone has more compute and storage capacity than the entire Arpanet did in the 1970s.

He also got a round of unsolicited applause from the crowd this morning for pointing out the value of SAM-QFS and chiding Sun for spending too much time talking about ZFS and not giving more attention to its widely used software for large-scale archival storage.

Jim said that storage is in many cases moving closer to the user, with GBs of storage in laptops or in pockets on phones and USB drives. That conflicts with organizational drives to do a better job of organizing and protecting data, and with vendor pushes to move data and systems into the cloud. He noted that the speed of light is not a suggestion, it's a law - we can't change it to make clouds work better. He went on to suggest that campus IT has come to think of itself as a plumber rather than an innovator, and that trend needs to reverse if we are going to successfully address these challenges. He listed several efforts under way at Clemson, including the broadening of IT support beyond the traditional Helpdesk out to power users and user groups.

Now It's Official: The Grid RIP


It's official now: Arnie put it on the big screen. I was too busy helping with his demo to take notes on the talk, but he's going to send me the slides, so I'll update this post with more later.

Where's "Head" Bubba?



Glen and I have been wondering where "Head" Bubba is... he has been at recent Sun HPC Consortium meetings, and as much as we're struck by his presence, we're even more struck by his absence this year. Enough so that we felt compelled to learn more.

To give you some sense of why this is interesting: this gentleman is the VP of IT Research and Development at Credit Suisse First Boston, bears more than a passing resemblance to one of the greatest singers of all time, and his badge (and, it turns out, legal name) reads "Head" Bubba.

Turns out the more you look, the more interesting it gets. I hope his absence from this year's conference isn't the result of the downturn in the economy, or worse, the impact of seven barrels of Jack Daniel's. As someone who has kept kegs in the basement for the last 20 years, I know firsthand that surviving with large quantities of good beverages in easy reach is all about pacing. Plus I'm a bourbon fan (Basil Hayden's), so I'll have to ask him more about his experiences with the great Tennessee whiskey next time I see him.

Update 11/20/08 16:44 - He's Here After All - I ran into HB this afternoon outside one of the conference sessions, introduced myself, and had a brief conversation with him and one of his colleagues. "Head" doesn't drink, but gives away the bottles of Jack to people with creative technology ideas - he mentioned a couple of recent recipients (individuals at Mellanox and, I think, Stanford), which I think is really cool. He suggested that maybe I could come up with an idea and get a bottle, which I said I thought was unlikely. I mentioned that I am a homebrewer, and his colleague suggested that perhaps I could improve on the venerable VAXtap, which I said was more in my league. I'll have to keep my eyes peeled for an old half rack coming out of the data center.

Update 7/1/11 - HB and JD - I haven't been back to SC since 2008, however I got email from HB this morning, looks like I've qualified for a bottle from his latest barrel! I haven't made any progress on the VAXtap, inspiring as the recent New York Times article about the surge in popularity of home brewing might be.

Sunday, November 16, 2008

Tour of Ranger - Loud and Windy



The Ranger cluster at TACC, the largest public research cluster in the world at 579 Tflops, does not make for your typical data center tour. The system took $30M and two and a half years to implement, with total costs of $60M over four years. Two sides of the data center have glass walls, making for a nice showcase.

Before going in, Glen gives us a tour of one of the blades. The cluster has almost 4000 of these, for a total of 15,744 processors, 62,976 cores, and 123 TB of memory.

When you enter the room, the immediate sensation is of an intense environment. It's very loud, and in the rows, very windy. The APC in-row chillers I'm standing in front of sit on each side of the blade racks, pulling air from the enclosed hot aisle and returning it cooled. Unfortunately I don't have enough hair to really give you a true sense of it.

Here we see six SunBlade 6000 chassis in two racks, each with twelve blades. Storage racks with 1.7 PB of capacity, built from Sun Thumpers running the Lustre filesystem, sit at the ends of each aisle. Data can only stay on the system for 30 days, as users are creating 5-20 TB/day.

These are power distribution units (PDUs) for the cluster, which draws 2.4 MW at peak load, or enough power for 2400 typical residential homes. At $0.06/kWh, the annual power bill is ~$1M. There are no UPSes or generators for the system, though they are planning to add UPSes for the storage and network fabric. Also, notice the floor vents - there is room-based HVAC in addition to the in-row systems to manage humidity and the larger environment.
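The power bill is easy to sanity-check from those numbers:

    # Sanity check on the quoted power bill (at peak draw; the actual average
    # draw will be somewhat lower, which is consistent with the ~$1M figure).
    peak_mw = 2.4
    rate_per_kwh = 0.06
    hours_per_year = 24 * 365

    annual_cost_at_peak = peak_mw * 1000 * hours_per_year * rate_per_kwh
    print(f"${annual_cost_at_peak:,.0f} per year at peak draw")  # ~$1.26M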

The cluster itself takes 2000 sf; another 1500 sf is needed for PDUs and chillers. The room overall is 6000 sf.

The enclosed hot aisles allow for much greater efficiency in cooling the tremendous heat load. The fire suppression is water sprinklers (dry-pipe pre-action); however, they had to add more smoke sensors and alarms to deal with the enclosed aisles, as there was concern that someone working inside a closed aisle wouldn't hear or see an alarm.

This is one of two massive Magnum Infiniband switches, the world's largest such device, with 3456 non-blocking ports. It is so dense that engineering the cabling was a major challenge.

Ranger supports over 1500 users and 400 research projects, and has handled more than 300,000 jobs for a total of some 220,000,000 CPU hours. There are larger clusters out there, but Ranger gets kudos for its density, and for putting together a very high-performing system on a considerably smaller budget than the DOE labs.