The Strategic Importance of Technical Computing Software

Published by: Srini Chari

Beyond sticking processors together, “sticky” Technical Computing and Cloud software can help organizations unlock greater business value through the automated integration of Technical Computing assets – systems and applications software.

Most mornings when I am in Connecticut and the weather is tolerable, I go for a jog or walk in my neighborhood park in the Connecticut Sticks. One recent crisp, sunny fall morning, as I was making my usual rounds, I got an email alert indicating that IBM had closed its acquisition of Algorithmics – a financial risk analysis software company that would be integrated into IBM’s Business Analytics division. This, along with the then-recent announcement of IBM’s planned acquisition of Platform Computing (www.ibm.com/deepcomputing), sparked a train of thoughts that stuck with me through the holidays and through my to-and-fro travel of over 15,000 miles to India and back in January 2012. Today is February 25, 2012 – another fine day in Connecticut – and I just want to get in a gentle three-mile jog, but I made a personal commitment to finish and post this blog today. So here it is before I go away to the Sticks!

Those of you who have followed High Performance Computing (HPC) and Technical Computing over the past few decades, as I have, may appreciate these ruminations more. But these are not solely HPC thoughts. They are, I believe, indicators of where value is migrating throughout the IT industry and of how solution providers must position themselves to maximize their value capture.

Summarizing Personal Observations on Technical Computing Trends in the Last Three Decades – The Applications View

My first exposure to HPC/Technical Computing was as a Mechanical Engineering senior at the Indian Institute of Technology, Madras in 1980-1981. All students were required to do a project in their last two semesters. The project could be done individually or in groups. Projects required either significant laboratory work (usually done in groups) or significant theoretical/computational analysis (usually done individually). Never interested in laboratory work, I decided to work on a computational analysis project in alternate energy. Those were the days of the second major oil crisis. So this was a hot topic!

Simply put, the project was to model the flame propagation in a hybrid-fuel (ethanol and gasoline) internal combustion engine using a simple one-dimensional (radial) finite-difference model, study this chemically reacting flow over a range of concentration ratios (ethanol/gasoline : air), and determine the optimal concentration ratio that maximized engine efficiency. Using the computed flame velocity, it was possible to algebraically predict the engine efficiency under typical operating conditions. We used an IBM 370 system, and in those days (1980-1981) the simulations ran in batch mode overnight, with punched cards as input. It took an entire semester (about four months) to finish this highly manual computing task, for several reasons:

  1. First, I could run only one job per night: I physically went to the computer center, punched the data deck and the associated job control statements, and then examined the printed output the following morning to see whether the job had run to completion. This took many attempts, as inadvertent input errors could not be detected until the next morning.
  2. Second, computing resources and performance were severely limited. Even when a job began running, it often would not finish on the first attempt; it would be held in quiescent (wait) mode while the system processed other, higher-priority work. When resources became available again, the quiescent job would resume, and this cycle repeated until the simulation terminated normally. This back and forth often took several days.
  3. Third, we had to verify that the results made engineering sense. This too was cumbersome: visualization tools were still in their infancy, so interpreting the results was a very manual and time-consuming process.
  4. Finally, to determine the optimal concentration ratio that maximized engine efficiency, steps 1-3 had to be repeated over a range of concentration ratios.

By the time all this was done, the semester had ended, and I was ready to call it quits. But I still had to type the project report. That was another ordeal. We didn’t have sophisticated word processors that could type Greek letters and equations, create tables, and embed graphs and figures. So this took more time and consumed about half my summer vacation before I graduated in time to receive my Bachelor’s degree. But in retrospect, this drudgery was well worth it.

It makes me constantly appreciate the significant strides made by the IT industry as a whole – dramatically improving the productivity of engineers, scientists, analysts, and other professionals. And innovations in software, particularly applications and middleware, have had the most profound impact.

So where are we today in 2012? The fundamental equations of fluid dynamics are still the same, but the applications benefiting industry and mankind are wide and diverse (for those of you who are mathematically inclined, please see this excellent one-hour video on the nature and value of computational fluid dynamics (CFD): https://www.youtube.com/watch?v=LSxqpaCCPvY).

We also have yet another oil crisis looming ominously. There’s still an urgent business and societal need to explore the viability and efficiency of alternate fuels like ethanol, and it remains a fertile area for R&D. Much of this R&D entails solving the equations of multi-component, chemically reacting, transient, three-dimensional fluid flows in complex geometries. This may sound insurmountably complex computationally.
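
For the mathematically inclined, the heart of the problem is a coupled system of conservation laws: mass, momentum, and energy, plus one transport equation per chemical species. In generic textbook form (with a simple Fickian diffusion assumption, not any particular solver’s formulation), each species mass fraction Y_k obeys something like:

```latex
% Species transport (generic form): rho = density, u = velocity,
% D_k = diffusivity of species k, \dot{\omega}_k = chemical production rate.
\frac{\partial (\rho Y_k)}{\partial t}
  + \nabla \cdot \left( \rho \, \mathbf{u} \, Y_k \right)
  = \nabla \cdot \left( \rho \, D_k \, \nabla Y_k \right) + \dot{\omega}_k,
\qquad k = 1, \dots, N_s
```

The nonlinear production terms are what couple the chemistry to the flow and make these simulations so computationally demanding.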

But in reality, there have been many technical advances that have helped reduce some of the complexity.

  1. The continued exponential improvement in computer performance – at least a billion-fold over 1981 levels – enables timely calculation.
  2. Many computational fluid dynamics (CFD) techniques are now sufficiently mature, and commercial applications such as ANSYS FLUENT do an excellent job of modeling the complex physics while providing very sophisticated pre- and post-processing capabilities that improve the engineer’s productivity.
  3. These CFD applications can leverage today’s prevalent Technical Computing hardware architecture – clustered multicore systems – and scale very well.
  4. Finally, the emergence of centralized cloud computing (https://www.cabotpartners.com/Downloads/HPC_Cloud_Engineering_June_2011.pdf) can dramatically improve the economics of computation and reduce entry barriers for small and medium businesses.

One Key Technical Computing Challenge on the Horizon

Today my undergraduate (1981) chemically reacting flow problem can be fully automated and run on a laptop in minutes – perhaps even on an iPad. And this would produce a “good” concentration ratio. But a one-dimensional model may not truly reflect the actual operating conditions. For that we would need today’s three-dimensional, transient CFD capabilities, which can run economically on a standard Technical Computing cluster and produce a more “realistic” result. With integrated pre- and post-processing, engineers’ productivity would be substantially enhanced. This is possible today.
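
Just for fun, here is a rough sketch in Python of what that semester-long exercise might look like today: a toy one-dimensional reaction-diffusion model of a propagating flame front, swept over a range of concentration ratios to pick a “good” one. Every constant and the efficiency proxy below are invented for illustration – this is neither my 1981 model nor real combustion chemistry.

```python
# Toy sketch only: a 1-D reaction-diffusion model of a propagating flame
# front, swept over fuel/air concentration ratios. Every constant below is
# invented for illustration; this is not real combustion chemistry.
import numpy as np

def flame_speed(ratio, nx=200, nt=4000, length=1.0, dt=1.0e-5):
    """Estimate the front propagation speed for a given concentration ratio."""
    dx = length / (nx - 1)
    x = np.linspace(0.0, length, nx)
    u = np.where(x < 0.05, 1.0, 0.0)       # reaction progress: ignited at the left end
    alpha = 1.0                            # diffusivity (arbitrary units)
    # Invented reaction-rate curve that peaks near a "stoichiometric" ratio of 1.0
    k = 500.0 * np.exp(-((ratio - 1.0) ** 2) / 0.1)
    front_start = x[np.argmax(u < 0.5)]
    for _ in range(nt):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (alpha * lap + k * u**2 * (1.0 - u))   # explicit time step
        u[0], u[-1] = 1.0, 0.0             # boundary conditions
    front_end = x[np.argmax(u < 0.5)]
    return (front_end - front_start) / (nt * dt)

def efficiency_proxy(ratio):
    """Stand-in for the algebraic efficiency estimate driven by flame speed."""
    return flame_speed(ratio)

if __name__ == "__main__":
    ratios = np.linspace(0.6, 1.6, 11)
    best = max(ratios, key=efficiency_proxy)   # the whole semester's sweep, in one loop
    print(f"'Good' concentration ratio: {best:.2f}")
```

On a laptop this sweep finishes in seconds. The point is not the physics; it is that the entire mechanics of the 1981 workflow (punched cards, overnight batch runs, manual restarts) has collapsed into a loop.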

But what if a company wants to run several of these simulations concurrently and share the results with a broader engineering team? That team may wish to couple the engine operating data to the drivetrain through the crankshaft using kinematics, and then use computational structural dynamics and exterior vehicle aerodynamics to model the automobile (chassis, body, engine, etc.) as a complete system and predict its behavior under typical operating conditions. Let’s further assume that crashworthiness and occupant safety analyses are also required.

This system-wide engineering analysis is typically a collaborative, iterative process that requires several applications to be integrated in a workflow, producing and sharing data along the way. Much of this is still manual, and it is one of today’s major Technical Computing challenges – not just in the manufacturing industry but across most industries that use Technical Computing and leverage data. This is where middleware will provide the “glue” – and believe me, it will stick if it works! And work it will! The Technical Computing provider ecosystem will head in this direction.
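
To make the “glue” idea concrete, here is a minimal sketch of what such a workflow layer does: each analysis step declares the data it consumes and produces, and a runner launches steps (concurrently where possible) as soon as their inputs are available. The step names, data items, and in-memory runner below are placeholders for illustration; real Technical Computing middleware would submit actual solver jobs to a cluster and stage real files between them.

```python
# Minimal sketch of workflow "glue": steps declare inputs/outputs, and a runner
# executes them (concurrently where possible) once their inputs are available.
# Step names and data items are placeholders, not any vendor's tools or formats.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    inputs: set
    outputs: set

    def run(self, data):
        # A real middleware layer would submit a solver job to a cluster here
        # and stage its result files; we just record the produced outputs.
        print(f"running {self.name} using {sorted(self.inputs)}")
        return {out: f"result-of-{self.name}" for out in self.outputs}

steps = [
    Step("engine_cfd",       set(),                              {"engine_loads"}),
    Step("crank_kinematics", {"engine_loads"},                   {"drivetrain_loads"}),
    Step("aerodynamics",     set(),                              {"aero_loads"}),
    Step("structural",       {"drivetrain_loads"},               {"chassis_response"}),
    Step("full_vehicle",     {"chassis_response", "aero_loads"}, {"system_behavior"}),
    Step("crashworthiness",  {"chassis_response"},               {"occupant_safety"}),
]

def run_workflow(steps):
    data, pending = {}, list(steps)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = [s for s in pending if s.inputs.issubset(data)]
            if not ready:
                raise RuntimeError("unsatisfiable or cyclic dependencies")
            for step, produced in zip(ready, pool.map(lambda s: s.run(data), ready)):
                data.update(produced)
            pending = [s for s in pending if s not in ready]
    return data

run_workflow(steps)
```

Production middleware adds what this skeleton deliberately leaves out: cluster scheduling, data movement, security, policy, and monitoring across teams and sites.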

Circling Back to IBM’s Acquisitions of Algorithmics and Platform Computing

With the recent Algorithmics and Platform Computing acquisitions, IBM has recognized the strategic importance of software and middleware for increasing revenues and margins in Technical Computing – not just for IBM but also for value-added resellers worldwide, who could build higher-margin implementation and customization services on these strategic software assets. IBM and its application software partners can give these channels a significant competitive advantage, expanding reach and penetration with the small and medium businesses that are increasingly using Technical Computing. Coupled with other middleware such as GPFS and Tivoli Storage Manager, and with the anticipated growth of private clouds for Technical Computing, expect IBM’s ecosystem to enhance its value capture. And expect clients to achieve faster time to value!
