After the pandemic…what now?

What is there to say that has not already been said? Well, maybe a bit of personal reflection is in order.

It is mid-February 2021, near what I hope is the end of the COVID-19 pandemic that began in 2020.

My son, George, anxiously awaits decisions from the medical schools where he has applied and interviewed. The University of Alabama at Birmingham (UAB) remains a fantastic option for him, as he has applied to the UAB Medical Scientist Training Program (MSTP), where he would be able to earn both his MD and PhD degrees.

The interview at UAB went fantastically well for George. The head of UAB MSTP admissions seems to like George quite a bit. It would be the thrill of my life to see George matriculate to UAB in the fall of 2021 and begin that academic and intellectual journey.

It’s been a LOOONNNNNNGGG Time!

Wow…I can’t believe that so much time has passed since I was actively posting to this blog. I had a lot to say when I was writing feverishly about semiconductor workflows. Since then, I have transitioned quite actively from semiconductor design into the IoT arena, with a more central focus on software microservices. However, to my disappointment, and from what I can tell, not much has changed in the world of semiconductor design in the last five years.

Today, a lot of what we hear about in software design and emerging technologies is Machine Learning (ML), Artificial Intelligence (AI), data analytics, and IoT applications across all sectors. I personally find each of these trends in software engineering intriguing, and as an old-school chip designer, I invariably find myself wondering why the semiconductor industry remains stuck on an island of stagnation with respect to these exciting trends.

Based solely upon the exorbitant cost of manufacturing semiconductors, designing an integrated circuit according to some “probable” outcome is an extremely risky venture. Perhaps there is no applicability for AI or ML in what is a precise and structured process for hardware design. At the very least, Intelligent Automation (IA) may be an area where further design process workflow enhancements could be forthcoming.

Clearly, Electronic Design Automation (EDA) software falls into a category of IA that applies later in the chip design workflow rather than in the early specification & design phases.

More to come…

Autoregressive modeling for capacity planning in High Performance Computing Workflow-as-a-Service (WFaaS) clouds…

As an undergraduate and graduate student, I spent the majority of my time studying digital signal processing algorithms. As a result, over my engineering career, I often find myself couching problems in the context of whether a good filtering algorithm can be of use. I hate getting rusty using my Fast Fourier Transform (FFT) skills! 🙂  However, in this blog, I think that dusting off my Linear Predictive Coding & Autoregressive modeling pastime may prove to be an interesting basis for a discussion of capacity planning for cloud computing service models.

I’m now moving into the next phase of truly understanding the computational model and behavior for High Performance Computing (HPC) Workflow-as-a-Service (WFaaS) architectures. I think that this is an interesting problem, because if we can understand the dynamics of HPC WFaaS models, then we should be able to optimize cloud capacity planning, especially for oversubscription events. In other words, what I would like is a priori knowledge of exactly what computational resources will be needed, based upon a posteriori events, i.e. we seek to minimize the “error” between the estimated processing resources needed and the instantaneous processing demand. This is a classic forward linear prediction problem. Aaaah, filtering! YESSSS!!
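To make the forward linear prediction idea concrete, here is a minimal sketch in Python that fits AR coefficients to a demand history by least squares and produces a one-step forecast. The demand series is synthetic and the model order is arbitrary; this is an illustration of the estimation step, not a production capacity planner.

```python
import numpy as np

def fit_ar_coefficients(series, order):
    """Fit AR(p) coefficients by least squares over lagged samples."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step forward prediction from the most recent p samples."""
    return float(np.dot(series[-len(coeffs):], coeffs))

rng = np.random.default_rng(0)
# Hypothetical demand history: a slow trend plus noise (a stand-in for CPU-hours per interval).
demand = 100 + 0.5 * np.arange(200) + rng.normal(0, 5, 200)

coeffs = fit_ar_coefficients(demand, order=4)
print(f"one-step forecast of needed capacity: {predict_next(demand, coeffs):.1f}")
```

The prediction error between that forecast and the demand actually observed in the next interval is exactly the quantity the capacity planner would try to drive toward zero.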

Not much I can add to this!

This blog will take some time to develop the premise, but if you stay with it, I think we’ll reach some interesting conclusions.

So, here goes…

Traffic/packet patterns in IP/LAN networks & processing demand workloads?

Over the past fifteen (15) years, traffic patterns in IP & LAN networks have been well researched. The most recent research had to extend into the characterization of Voice-over-IP and Video-over-IP services, i.e. isochronous traffic patterns. The question that I pose here is this: do processing demands for HPC WFaaS models have any statistical correlation to various types of data communication network traffic patterns?

Since data communication traffic is generated from software applications running on a processor somewhere in a network, it seems intuitive that, within the context of certain classes of software applications, there must be a correlation between network traffic and software processing workloads. Where these correlations exist, the autoregressive models used to optimize network traffic should also apply to capacity planning for the processing demands of HPC WFaaS architectures. As of the writing of this blog post, investigation of these correlations between aggregated WFaaS software application processing demand and the burst traffic patterns in data communications is the operative research in question.
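As a first test of that hypothesis, one could simply measure the lagged cross-correlation between a traffic series and a processing-demand series. The sketch below does this on fabricated data in which demand deliberately lags traffic by three intervals; real WFaaS telemetry would, of course, replace both series.

```python
import numpy as np

def normalized_cross_correlation(x, y, max_lag):
    """Return {lag: correlation of x[t] with y[t+lag]} for lags in [-max_lag, max_lag]."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        xs = x[max(0, -lag):len(x) - max(0, lag)]
        ys = y[max(0, lag):len(y) - max(0, -lag)]
        corr[lag] = float(np.mean(xs * ys))
    return corr

rng = np.random.default_rng(1)
traffic = rng.poisson(lam=50, size=500).astype(float)            # packets per interval (made up)
processing = 0.8 * np.roll(traffic, 3) + rng.normal(0, 2, 500)   # demand that lags traffic by 3

corr = normalized_cross_correlation(traffic, processing, max_lag=10)
best_lag = max(corr, key=corr.get)
print(f"strongest correlation at lag {best_lag}: {corr[best_lag]:.2f}")
```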

Examination of processing demand patterns in an HPC WFaaS

Complex engineering workflows that are dominated by computer-aided software applications often have processing demand patterns that are dependent upon the type of software application that is being used at any specific phase of a workflow. This is particularly true of an HPC semiconductor design-to-release-manufacturing WFaaS. As HPC application requests are often defined through long processing durations and large memory footprints, it makes sense to distinguish an HPC request from what could be considered a “transactional” request that is representative of highly interactive processing dialogue between a user and a host system.

Let’s assume that a workflow consists of five (5) separate software applications: A, B, C, D, and E. Further, let’s assume that we assign four (4) operational performance metrics for each application, namely, (1) memory footprint (including paging), (2) processing duration, (3) I/O (storage & network) activity, and (4) processing profile (floating/fixed point, database, algorithmic, etc.). Each HPC application request could then be assigned an “application weight,” e.g. Ax, Bx, Cx, Dx, Ex, where x is a positive rank derived from the application’s four operational performance metrics. If each of the five (5) software applications exhibits a distinct operational profile based upon the four (4) performance metrics, then as more users are serviced by the WFaaS, we can begin to visualize an aggregated stream of processing demand as defined by application performance metrics.
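A tiny sketch of that weighting scheme follows. The metric scores and their relative importance are invented; the point is only that each application collapses to a single positive rank that can then drive the aggregated demand stream.

```python
import numpy as np

# Hypothetical normalized scores per application, in the order:
# [memory footprint, processing duration, I/O activity, processing profile intensity]
metrics = {
    "A": [0.9, 0.8, 0.3, 0.7],
    "B": [0.2, 0.1, 0.6, 0.4],
    "C": [0.5, 0.9, 0.8, 0.9],
    "D": [0.3, 0.4, 0.2, 0.5],
    "E": [0.7, 0.6, 0.5, 0.6],
}
# Assumed relative importance of the four operational performance metrics.
importance = np.array([0.35, 0.35, 0.15, 0.15])

def application_weight(scores):
    """Collapse the four metrics into a single positive rank."""
    return float(np.dot(scores, importance))

for app, scores in metrics.items():
    print(f"{app}: weight = {application_weight(scores):.2f}")
```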

Let’s stay with the data communication reference model for a moment. We can think of a computing cloud of Virtual Machines (VM) as a buffer of fixed resources or credits. In round robin/fair queuing algorithms, buffer credit schemes can “filter” or shape traffic patterns in synchronous hardware bus protocols. The PCI Express (PCIe) standard is one such example. As VMs within a cloud can be randomly released and allocated, for the purpose of abstracting this problem, it doesn’t matter if the resource buffer is configured as FIFO or LIFO. As VMs are allocated and released, the processor buffer resources increase or decrease dynamically, in exactly the same manner as tokens or credits are allocated and spent in a PCIe system architecture.
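A toy version of that credit mechanism, loosely analogous to PCIe flow-control credits, is sketched below; the capacity and request sizes are arbitrary.

```python
class CreditPool:
    """Fixed pool of compute credits; VM allocation spends credits, VM release returns them."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.available = capacity

    def allocate(self, credits):
        """Grant a request only if enough credits remain; returns True on success."""
        if credits <= self.available:
            self.available -= credits
            return True
        return False  # request must wait or be queued

    def release(self, credits):
        """Return credits when a VM is released, never exceeding the pool capacity."""
        self.available = min(self.capacity, self.available + credits)

pool = CreditPool(capacity=100)
print(pool.allocate(30), pool.allocate(80))  # True, False (insufficient credits)
pool.release(30)
print(pool.allocate(80))                     # True once credits are returned
```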

(1) Let’s now aggregate the individual streams of processing demand requests from a pool of HPC user requests into a singular time series of requests to be serviced. As a second alternative, we could (2) aggregate HPC requests in exactly the same manner as the time series of the S&P500, which is an aggregate time series formed from the sum of the 500 individual stocks that comprise the S&P500, i.e. at any time sample, sum or translate all HPC requests into a single measurement. I’m still thinking about the underlying spectral implications between these two time series definitions.
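The two aggregation choices can be contrasted in a few lines of code. The per-user demand streams below are fabricated; option (1) preserves individual requests as an ordered series, while option (2) collapses each time sample into a single index-style measurement.

```python
import numpy as np

rng = np.random.default_rng(2)
# Each row is one user's demand (e.g. cores requested) at each of 20 time samples.
user_streams = rng.integers(0, 8, size=(5, 20))

# (1) Ordered series of individual requests, tagged (time, user, demand).
interleaved = [(t, u, int(user_streams[u, t]))
               for t in range(user_streams.shape[1])
               for u in range(user_streams.shape[0])
               if user_streams[u, t] > 0]

# (2) Index-style aggregate: one summed measurement per time sample.
aggregate = user_streams.sum(axis=0)

print(f"{len(interleaved)} individual requests; aggregate series: {aggregate.tolist()}")
```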

Deviations from normal capacity loads for detection of Denial-of-Service (DoS) attacks?

So this brings up another topic relevant to the concept of a “resilient cloud”.

  • Can we use our autoregressive models to detect anomalous or deviant workload patterns that mask DoS attacks? (A rough sketch follows this list.)
  • What parameters or metrics passed between Virtual Machines (VMs) in a computing cloud could be used as part of a consensus algorithm to create alerts for suspected DoS attacks?
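A rough sketch of the first question: flag any interval whose prediction error exceeds a multiple of the recent error spread. The threshold, window, and injected burst below are all assumptions made for illustration.

```python
import numpy as np

def anomaly_flags(observed, predicted, window=50, k=4.0):
    """Flag samples whose prediction error deviates far from the recent error distribution."""
    errors = observed - predicted
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(window, len(errors)):
        recent = errors[t - window:t]
        flags[t] = abs(errors[t] - recent.mean()) > k * recent.std() + 1e-9
    return flags

rng = np.random.default_rng(3)
observed = 100 + rng.normal(0, 3, 300)   # steady demand plus noise
observed[220:225] += 60                  # injected burst standing in for a suspected DoS event
predicted = np.full(300, 100.0)          # stand-in for the AR model's one-step forecasts

print(np.where(anomaly_flags(observed, predicted))[0])  # indices around 220-224
```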

More to come…

Autoregressive model and forward linear prediction

Overview of computation of forward linear prediction taps.
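A compact sketch of the standard approach, the Levinson-Durbin recursion, which computes the forward prediction taps from an autocorrelation sequence; the AR(2) test signal and its coefficients are synthetic.

```python
import numpy as np

def levinson_durbin(r, order):
    """Return forward prediction taps (x_hat[n] = sum_i taps[i-1]*x[n-i]) and final error power."""
    a = np.array([1.0])          # prediction-error filter, a[0] = 1
    error = float(r[0])
    for m in range(1, order + 1):
        acc = float(np.dot(a, r[m::-1][:len(a)]))   # r[m] + a1*r[m-1] + ... + a_{m-1}*r[1]
        k = -acc / error                            # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                         # order-update of the filter
        error *= (1.0 - k * k)
    return -a[1:], error

# Synthetic AR(2) process: x[n] = 1.5*x[n-1] - 0.7*x[n-2] + w[n]
rng = np.random.default_rng(4)
n, x, w = 2000, np.zeros(2000), rng.normal(0, 1, 2000)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + w[t]

r = np.array([np.dot(x[:n - lag], x[lag:]) / n for lag in range(3)])  # biased autocorrelation
taps, err = levinson_durbin(r, order=2)
print(np.round(taps, 2))  # expected to land near [1.5, -0.7]
```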

M/M/1 Queues

Standard queue applicability to WFaaS workloads.
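For reference, the basic M/M/1 relationships are easy to compute: with arrival rate λ and service rate μ (λ < μ), utilization is ρ = λ/μ, the mean number of jobs in the system is L = ρ/(1−ρ), and the mean time in the system is W = 1/(μ−λ). The rates below are placeholders, not measurements from any WFaaS deployment.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 metrics; only valid when the arrival rate is below the service rate."""
    if arrival_rate >= service_rate:
        raise ValueError("M/M/1 is unstable when arrivals reach the service rate")
    rho = arrival_rate / service_rate              # utilization
    return {
        "utilization": rho,
        "mean_jobs_in_system": rho / (1.0 - rho),
        "mean_jobs_waiting": rho ** 2 / (1.0 - rho),
        "mean_time_in_system": 1.0 / (service_rate - arrival_rate),
    }

print(mm1_metrics(arrival_rate=8.0, service_rate=10.0))  # e.g. HPC jobs per hour
```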

References:

Notes on Digital Signal Processing: Practical Recipes for Design, Analysis, and Implementation, C. Britton Rorabaugh, Prentice Hall, 2010, ISBN-10: 0-13-158334-4.

Methods and Applications of Statistics in Business, Finance, and Management Science, N. Balakrishnan, John Wiley & Sons, 2010, ISBN: 978-0-470-40510-9.

“A technology looking for a problem”…Can we apply search concepts to “rank” Semiconductor Intellectual Property?

It's all about attitude, stubbornness, and establishing new paradigms

While much of the appreciation for this blog topic is restricted to semiconductor component design, by the end of the story, I hope to bring this back into the context of generalized intellectual property distribution in cloud computing enterprise architectures.

Those of us who have been in R&D have either heard of or designed “technologies or solutions that are looking for a problem.” Those of us that have been entrepreneurs also know that “solutions looking for a problem don’t get funded.” (http://blog.startupprofessionals.com/2010/11/solutions-looking-for-problem-dont-get.html) However, I was only a couple of hours into my latest audio book, “In The Plex: How Google Thinks, Works, and Shapes Our Lives”, by Steven Levy, when I was continually reminded of just how many indirect network effects have been spawned out of Google’s obsession with the problem of “SEARCH”.

Solutions looking for problems!

Tackling what could certainly have been classified at first glance by most computer scientists as a “bounded problem”, Larry Page & Sergey Brin have managed not only to profitably legitimize the problem of “search” for interesting web pages, but also to somehow indirectly integrate it into practically every facet of our lives! Okay, that last part is stretching it a bit, but sometimes it does seem like Google is everywhere.

Our past experiences in life always color our thinking and perceptions, and my life as a chip designer certainly influences my “mind broadening” diatribes as well. The problem I’m challenging those reading this blog to think about is intertwined with my previous blog on Semiconductor Intellectual Property (SIP) in “the Cloud”, aka, “Snakes On A Plane!”

Present day semiconductor component architectures, Systems-on-Chip (SoCs), rely heavily upon previously designed and verified functional modules, often obtained from 3rd parties in exchange for licensing or royalty fees. When a single SIP design flaw could mean the demise of a chip company’s reputation or business, the fear that chip designers and architects have when integrating a 3rd party SIP module goes viral. The problem that the wary chip engineer has when assessing the worth of any SIP module is effectively a SEARCH problem!

That’s right, I said it, it’s a search problem! The engineer is searching for the answer to the question of whether that SIP block is worthy to be integrated on the chip, i.e. if there are bugs in the SIP, where are they and how bad can they be? What are the risks? The focus of this blog entry is this: can we assign a “rank” to an SIP module in a manner similar or tangentially related to the ranking of Web pages, i.e. a SIPRank? To an experienced design engineer, such an attempt to design an automaton to quantify the value of a digital function may seem absurd, and I fully appreciate that healthy skepticism. And while initial attempts at defining SIPRanks will be crude, over time, just as the exponential expansion of Google’s web page database continued to refine, test, challenge, and improve Google’s search algorithms and subsequent results, an increasingly large data sampling of SIP will provide a similar proxy for defining an effective SIPRank algorithm.
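Purely as a thought experiment, one crude way to borrow the machinery of PageRank for a SIPRank is to run a power iteration over a reuse graph, where an edge X → Y means “a design that integrated X also successfully integrated Y.” The blocks, edges, and damping factor below are all invented for illustration.

```python
import numpy as np

blocks = ["usb_phy", "ddr_ctrl", "pcie_ep", "sram_wrapper"]
edges = {"usb_phy": ["ddr_ctrl", "sram_wrapper"],
         "ddr_ctrl": ["sram_wrapper"],
         "pcie_ep": ["ddr_ctrl", "sram_wrapper"],
         "sram_wrapper": ["ddr_ctrl"]}

idx = {b: i for i, b in enumerate(blocks)}
n = len(blocks)
M = np.zeros((n, n))
for src, dsts in edges.items():
    for dst in dsts:
        M[idx[dst], idx[src]] = 1.0 / len(dsts)   # column-stochastic transition matrix

damping, rank = 0.85, np.full(n, 1.0 / n)
for _ in range(50):                               # power iteration
    rank = (1 - damping) / n + damping * M @ rank

for b, r in sorted(zip(blocks, rank), key=lambda br: -br[1]):
    print(f"{b}: {r:.3f}")
```

In practice the edges would have to come from something much richer than co-integration, such as verification coverage, bug history, and silicon-proven tape-outs, which is exactly where the hard work of an effective SIPRank would lie.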

More to come…

Self-serve frozen yogurt or Th’ Cloud…the chicken or the egg? Which came first or strange birds of a feather?

Yummy...help yourself to EXACTLY how much you desire!

Okay…based on the title of this blog, I’ll have to admit it, I may be spending way too much time thinking about cloud computing! But, my motivations are outweighing my logic, so here goes.

My wife, Judy, loves frozen yogurt. So much so that in her early entrepreneurial days she opened up a frozen yogurt store in west Boca Raton. Much to her chagrin, after moving to North Carolina, she found that frozen yogurt-mania was not quite so popular. With shops few and far between, spotting a frozen yogurt spot prompted an immediate “pull over”. So you can imagine her delight in seeing the recent outbreak of self-serve frozen yogurt stores spring up around our home town of Cary, NC. For the past 9 months, I’ve become quite a “taste cup” connoisseur of the local frozen yogurt cuisine.

One day this week, after lunch, Judy and I stopped by her favorite self-serve frozen yogurt oasis. As I accidentally spilled my taste cup, I began thinking about what had changed in the frozen yogurt business since Judy’s days in Boca. We began talking about her previous service model, comparing it to the new model of one person at the cash register with a line of customers serving themselves! No more fixed-size pricing, no separately priced toppings, no dedicated serving staff, and no crazy over-the-counter hand-offs! “It is absolutely the way to go!”, she told me. “Why didn’t I think of it?”

I then told her that this demand-based, self-serve model for frozen yogurt is exactly what is happening in the world of cloud computing. My question to her was which business model came to this realization first, cloud computing or frozen yogurt? With more of a disgusted than quizzical expression, my wife said, “Don’t you think you’re taking this cloud thing a bit far?” “Not really,” I retorted. It didn’t take much analogy work to finally convince her that these two business models are birds of a feather. She does get it. Did I mention that Judy was with Cisco’s Inside Sales team for a number of years, is an electrical engineer, and has an MBA?

Kind of like my absolute favorite..."Chocolate Cake Batter"

By the way, for all of those ISVs out there that cringe at the thought of Software-as-a-Service (SaaS), the self-serve frozen yogurt business is healthier than ever! And did I forget to mention, they’re making more money selling it this way than they ever did before!

“Who do you trust?” Hubba, hubba, hubba…what can cloud do for you in trusted supply chains?

Trust in Supply Chain Management – Threats beyond the US Department of Defense

I know it has been quite a while since the first Batman movie in which Jack Nicholson starred as The Joker. However, I cannot help but think of Nicholson’s Joker when I think about Trusted Supply Chains, when he said, “…And now, folks, it’s time for “Who do you trust!” Hubba, hubba, hubba! Money, money, money! Who do you trust? Me? I’m giving away free money. And where is the Batman? HE’S AT HOME WASHING HIS TIGHTS!”

My first take on the topic of supply chains was that it must be an abysmally dry topic, particularly in the context of the “white hot” world of cloud computing. At least that is what I believed until this week. With late notice last week, I had the distinct pleasure of serving as an invited panelist at the inaugural Critical Technologies Conference held this week at NAVSEA in Crane, IN. The panel was about the threat to national security from DIS-trusted supply chains infiltrating their way into DoD platforms of all types. The conference was held in a quaint and picturesque setting at NAVSEA’s Crane Division in southern Indiana, at the base’s club house situated on Lake Greenwood. Trust me when I tell you that I wasn’t the only attendee that was happy to have had the conference shuttle from Bloomington!

While the conference was limited to about one hundred attendees, the presentations and information were more than simply intriguing to the selective gallery of onlookers. The content was sobering and in some ways startling. While the focus of the conference was on interests for the US Department of Defense (DoD), the epiphany that hit me square between the eyes is the very nasty impact that globalization is wreaking on the supply chains of legitimate commercial interests. In this blog, I want to talk about Trusted Supply Chains and the positive impact that cloud computing architectures can have on what should be considered the lifeblood of a product, its supply chain.

I’m an electrical engineer by training, not an industrial engineer, so I hadn’t given a great deal of thought to the subject of supply chains until my MBA days. That’s the funny thing about MBA training for engineers: it actually forces you to get your head out of the calculus book and think about why you need calculus anyway. But back to the subject matter at hand. I’ve blogged ad nauseam about Christensen’s Value Chain Evolution (VCE) Theory. I’m afraid I have to do so again, because it is precisely VCE Theory that is responsible for the disaggregation in complex supply chains.

The interesting thing about VCE Theory in supply chain disaggregation is that it seems impervious to the granularity of the inflection point. It doesn’t seem to matter whether the inflection point in the supply chain is outsourcing diesel locomotive engines, steel factories, a few cleverly connected transistors on an integrated circuit, or a few lines of software code in a middleware module. The one thing that is discriminatory in VCE-affected supply chains is the “digital content ratio” exhibited by the supply chain. The more the supply chain is driven by information content that is inherently digital, the faster the absorption rate of VCE inflection points and the more subject the supply chain is to disaggregation.

Disaggregated supply chains can have very positive effects on extracting cost efficiencies in products. As with any pro, there is a con, and in this case the cost efficiency comes at the price of TRUST! Why do I think of the opening quote by The Joker? Because as a systems integrator when it comes to Supply Chain Risk Management (SCRM), who are you going to trust while saving money in assembling products? “…hubba, hubba, hubba! Money, money, money! Who do you trust?” These two issues, trust and money, go hand in hand in disaggregated supply chains.

“Hell isn’t merely paved with good intentions, it is walled and roofed with them” Aldous Huxley

Engineers, Supply Chains, Business Capital, Free Markets, and Good Intentions

I am no advocate for commercial isolationism, nor am I a proponent of a global free trade zone. However, we must be willing to take a candid look at the reality of industrial and commercial policies with enough open-mindedness to step back when we see those policies start backfiring. With that said, I am also a dog lover, and dogs do the darnedest things, almost always with an intent to please their masters. The same good intentions can be ascribed to the creators of disaggregated, VCE-shaped supply chains. However, just as our furry friend in the illustration shows, past habits that previously pleased our masters may now end up with unexpected and sometimes undesirable results!

The disaggregated inflection points in a supply chain evolve because there is no longer a differentiated advantage to maintaining that point in the supply chain vertically within the company. The value of outsourcing the service or part has become “good enough” and can be competitively sought after on the open market. The operative words “good enough” and “open market” impose a Supply Chain Management (SCM) responsibility on the consumption side of the supply chain, and the following questions must be considered:

  1. Is the outsource service truly “good enough”, i.e. will using the outsourced version of the component degrade the overall quality of the final product and ultimately the company’s reputation?
    • How will we maintain an ongoing quality control program in collaboration with the supplier?
  2. Does the supplier operate a sustainable business model?
    • Will they be in business as long as I need them to be?
    • Do we have acceptable alternative supply sources in our contingency plans?
  3. Is the supplier “trusted”?
    • Is the supplier honest, reputable?
    • What indemnification rights protect our company from transacting with this supplier?
    • Are the parts/service that I am receiving counterfeit?
    • Is the supplier operating under nefarious auspices?
  4. What about the distribution channels of the supplier?
    • What international tariffs or laws must be considered to integrate the supplier into our manufacturing processes?
    • How can we implement Just-In-Time manufacturing process efficiencies?

“Executive Decisions”…unexpected consequences from honest intentions…

Hard choices always start at the top

The electronics and software supply chains are characterized by high digital content ratios. These two supply chains are also very synergistic and have complex, interrelated dependencies. We are now observing practices in foreign markets that have degraded the integrity and trust of the electronics and software supply chains. The first and foremost practice is a market in counterfeit electronic parts. The second is digital content Intellectual Property (IP) theft; once that IP is compromised and disseminated, its control and recovery are effectively lost forever.

At the Critical Technologies Conference, the Supply Chain panel featured Tom Sharpe, CEO of SMT Corporation. SMT Corporation specializes in trusted electronics supply chain management. What I learned from Mr. Sharpe’s presentation is that knowing that counterfeit electronics exist and understanding the magnitude and prevalence of the problem are two entirely different things. A comment from Mr. Sharpe that strikes at the heart of the problem was this: “I guarantee you that if you have more than a few consumer electronics products in your home, there are counterfeit parts in one of those products!”

To most of us that statement does not translate into a serious threat, until one of the products that has been compromised with counterfeit electronics fails, malfunctions, or, worse, creates a hazard. In most cases, the product seems to work fine, but the unseen crime at work takes money out of the rightful pockets of the original manufacturers. Crime! Did I say crime? Yes, I did. When we examine the counterfeit electronics supply chain, what we find is that it is not all that hidden from view. In fact, we know a great deal about every step of the process. Most noteworthy is the cultural mindset: those engaged in the counterfeit electronics market do not associate any criminality with the activity, but in many respects see their market as a “green initiative” in recycling what is considered waste product.

The RnD cost that goes into designing, qualifying, testing, and supporting electronic components is staggering. The dozens of parameters specified for a component and characterized by its package markings are negated when counterfeiters re-mark the component with a higher performance designation. The situation becomes even more egregious when completely different components that share the same physical package are mislabeled. The fraudulent device is inserted into a product, the product fails test and is ejected from manufacturing to a quality assurance line, and the counterfeit manufacturing cycle begins anew.

The situation becomes much more arduous when the malfeasance penetrates the silicon die itself, in the form of controlled, digital-content intellectual property. This process of compromise in a trusted supply chain can be likened to an organ transplant in which the recipient is never known or found. In electronic component design & manufacturing, digital IP refers to the actual functional specification of a particular capability that the component exhibits or delivers. That IP will undergo a series of content transformations that lead to a final manufacturing description. At each transformation stage the IP is in a human- or machine-readable digital format that, if stolen or compromised, can be used to complete the manufacturing process.

The problem becomes intractable once the IP is removed from its intended application or component and transferred into a completely different component architecture. The stolen IP simply becomes another functional element of a black box system that is practically impossible to detect, akin to someone disappearing into a large crowd of people. The IP, along with the component that is now contaminated with stolen IP, becomes part of the landscape of a black market supply chain. While not identical in process, in context, the problem of theft and compromise of software IP may be easier to execute, but just as hard to recover from after the theft has occurred.

Tony Bent, Business Operations Director for National Semiconductor’s trusted foundry operations, who sat on the same Supply Chain panel, talked about the additional supply chain controls that National is manufacturing into its components to make it easier to detect counterfeit components earlier in the supply chain cycle. Tony also mentioned the brazenness exhibited by counterfeiters. He discussed situations in which National engineers have received calls from overseas persons asking why certain transistors are part of a particular circuit, clearly with an aim at replicating the silicon die for counterfeiting. The fact that the counterfeiting party has no reservation about contacting the original manufacturer borders on obscene!

Adding the “Trusted” cloud computing component into a disaggregated supply chain!

The issue of IT intellectual property rights, security, privacy, and protection is, without dispute or question, the single highest calling and priority for cloud computing service providers! No effort can be minimized, no corners cut, and no expense spared in fulfilling this mission statement for a cloud computing service provider. Simply stated, without total IT intellectual property control and security, a cloud computing service provider does not have a service to sell.

Information control and management is a crucial component of SCM. While the information content associated with SCM is not, in a strict sense, technical or engineering intellectual property controlled through patents and copyrights, it is valuable to the company’s operations and must be considered a form of intellectual property and treated with appropriate security concern. When a supply chain is vertically integrated within a single organization or company, control and management of SCM data is a simpler task. Disaggregate and distribute that SCM database across dozens of independent, international going concerns, and SCM data management flies out the window. If the information flow across a supply chain can be likened to the glue that holds together the flow of components across the supply chain, then SECURE, COLLABORATIVE information flow across that supply chain could be considered SUPER GLUE for the flow of components.

I suspect by now that you’ve probably surmised where I am going with this…enter the disruptive world of collaborative cloud computing enterprise architectures. I personally love the application of a disruptive technology that solves a VCE-driven disruption.

True disruptions always look funny at first glance and often end up as mainstream solutions.

For most companies, the transition to a cloud architecture is in itself scary enough, much less a public cloud. Therefore, the initial focus of my comments here will be on “community cloud computing” architectures, i.e. a collaborative, demand-based, multi-tenant, utility computing model designed to serve the interests of a specific community of users. The concepts stated here are just as applicable to public clouds. The key concept to retain from the previous statement is that multi-tenancy is the critical component in a community cloud service that binds collaboration with disaggregation.

Now let’s apply this vision to a supply chain workflow implemented as a secure, collaborative, cloud computing enterprise architecture.

Identity Profiles – Trust component for Cloud-based Supply Chain Management

“Identity in the Age of Cloud Computing”, J.D. Lasica, Communications and Society Program

“Technology enables companies to build and tear apart alliances and partnerships on an as-needed basis. Product decisions are becoming less dependent upon a fixed list of suppliers than on the range of suppliers available. Relationships come together based on a particular product or project and then disband at the end.”

“The beginnings of this move toward specialization is already on display in certain global supply chains, where workers in disparate venues focus on one aspect of the manufacturing process. For instance, eighteen companies were involved in developing and manufacturing the first Apple iPod.”

“Improved global coordination allows companies that have found unique ways to deliver products in certain markets to go global with those advantages.”

More to come…

Cloud computing as an “enterprise architecture” – Depending on your perspective…chameleon or kaleidoscope?

One of the key paradigm shifts that has been an indirect network effect of cloud computing is the perspective of the enterprise architecture represented by a cloud computing service provider. In this blog, I want to outline my thoughts on a variety of enterprise architecture “core diagrams” that should be examined depending upon the use case model for the cloud service.

Cloud computing enterprise architectures...beautiful, complex, and can change quickly...pick any two!

 

A reference framework for enterprise architecture from The Art of Enterprise Information Architecture – A Systems-based Approach for Unlocking Business Insight, M. Godinez, E. Hechler, K. Koenig, S. Lockwood, M. Oberhofer, M. Schroeck, IBM Press, 2010.

A Reference Enterprise Architecture Framework

Operating Perspective Affects How Your Enterprise Architecture Evolves

Cloud computing brings supply chain stakeholders into collaboration - Enterprise Architecture adaptation is important

More to come…

Stop the presses! News for ISVs…cloud computing is a service business. Mischief is afoot!

As the campaign for American independence emerged from the triumph at Trenton and entered into the new year of 1777, King George III expressed his dismay over the recent events in America and said to parliament,

“If this treason be suffered to take root, much mischief must grow from it.”

Regarding the impact of the Battle of Trenton, Sir George Otto Trevelyan noted:

“It may be doubted whether so small a number of men ever employed so short a space of time with greater or more lasting results upon the history of the world.”

Cloud computing - treasonous to some, liberating to others

Indeed, much mischief has been the result of the American Revolution and the events of late December 1776. But I am not writing here to give you a lesson in American history; rather, I want to explore corollaries in history that mirror technological nuances in the software industry.

Some in the software industry see a new mischief afoot, treasonous in nature: it’s known as cloud computing. And in so short a span of time, the cloud computing technological tsunami that we witnessed through 2010 may have the greatest impact of all the business changes the Internet has thrust upon us thus far.

Most software executives recoil in terror from, or admit with shame to, the service operations within their companies. I refer you to one of my recommended readings, The Business of Software, by Michael Cusumano. Published in 2004, The Business of Software may seem a little dated for the age of cloud computing, but the message Professor Cusumano strives to convey has an even greater importance because of the Software-as-a-Service (SaaS) model enacted by cloud computing services.

This blog entry follows up on a previous blog entry regarding how ISVs view cloud computing, so it may seem that this blog entry is essentially the same information. What I hope to convey in this blog is why the cloud computing services model can be embraced by software companies. This blog entry is more philosophical in nature than quantitative, more introspective than overt.

Why is it important for software companies to embrace the service aspects of their businesses? Pithily speaking, if they don’t, they will suffer to the benefit of those that do.

In a previous blog (from constraints to opportunity), I discussed the transformational effects, both economic and operational, that come from demand-based computing service models, particularly when applied to software-driven workflows. It is this psychology that changes the software use model in cloud computing from disconnected, task-oriented applications, to a productivity enhancing WorkFlow-as-a-Service (WFaaS) behaviorism.

During a conversation about cloud services at a conference this week, a director for a large defense contractor responded that when he thinks of services, he thinks of headcount and the correspondingly low consulting margins that accompany services. It was hard for him, as an engineering manager, to grasp that cloud computing could be considered a service model. Which raises the question: what is the definition of a service?

I contend, perhaps controversially, that all software should be considered a service delivered in the form of an automaton. After all, aren’t software applications written to provide a user experience that offloads labor, whether that labor is expended in the form of entertainment or a business process? Isn’t a traditional consulting service actually labor applied to software applications that haven’t yet been assembled or automated into a workflow? In this context, I postulate that software can be considered a form of service with higher gross margins, and that software development is about solving a service problem.

People and software - both perform a service

If you are of the mindset to accept cloud computing as a service offering, then you have to accept, at least in theory, that the underlying software that delivers the cloud workflow constitutes a service as well.

Cloud Computing’s Subtle Infectiousness

The internal corporate battles that raged at Microsoft over both the importance of an Internet strategy and the threat of open source software have been expounded upon in various business strategy books. What we learn from both of those threats to Microsoft’s competitiveness is that, given enough cash on the balance sheet, any company can overcome even the largest strategic blunders. Unfortunately (or fortunately, depending on your point of view), there is only one Microsoft and then, as the late Paul Harvey always said, “…the rest of the story.” But Microsoft has learned from its past mistakes and has made a home for itself in cloud computing.

I agree with Dennis Byron’s assessment regarding the indistinguishable markets of enterprise software, cloud computing, SaaS, and open source. However, Byron didn’t go far enough to characterize how the business models of each of those sectors affect their interrelated and respective markets. In this sense, I also think that force.com has got it right in building software frameworks that enable software applications to be deployed into cloud-based workflows.

(http://byrondennis.typepad.com/it_investment_research/2010/09/enterprise-software-cloud-saas-and-open-source-not-separate-markets.html).

Notes:

According to Gartner, the 2010 enterprise software market was $232B (http://www.gartner.com/it/page.jsp?id=1437613), with a projected market of $247B for 2011.

Mash-ups as an example of a workflow.

Microsoft Office as a workflow example!

More to come…

Can we link cloud security requirements and implementation patterns through executable Model Driven Architecture (MDA) formalisms?

 

Brave new world?

The adoption of cloud services by a target market is directly correlated to a Service Provider’s (SP) ability to prove the security of the service’s architecture.

This blog will examine cloud computing security design pattern specification using SecureUML and ComponentUML leveraging UML’s Object Constraint Language (OCL).  The objective in this discussion is to begin formalizing a generalized WorkFlow-as-a-Service (WFaaS) cloud security design pattern that can be specified through formal property definitions.  I then examine whether the final implementation deployment can be AUTOMATICALLY analyzed against the original formal property specifications.
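Setting the UML tooling aside for a moment, a hand-wavy Python stand-in for the underlying idea looks like this: state the security property as data, then check a candidate deployment against it automatically. The roles, resources, and deployment mapping are all invented; SecureUML/OCL would express the same thing far more rigorously.

```python
# Property: which operations each role is allowed to perform on each resource (assumed policy).
ALLOWED = {
    ("designer", "design_database"): {"read", "write"},
    ("foundry", "design_database"): {"read"},
    ("ip_provider", "design_database"): set(),
}

# Hypothetical deployment extracted from a configuration: (role, resource, granted operations).
deployment = [
    ("designer", "design_database", {"read", "write"}),
    ("foundry", "design_database", {"read", "write"}),   # grants more than the property allows
]

def check(deployment, allowed):
    """Return every (role, resource, extra operations) triple that violates the stated property."""
    violations = []
    for role, resource, granted in deployment:
        extra = granted - allowed.get((role, resource), set())
        if extra:
            violations.append((role, resource, extra))
    return violations

print(check(deployment, ALLOWED))  # -> [('foundry', 'design_database', {'write'})]
```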

I can think of no industrial task more financially daunting, technically challenging, or process intensive than the design of a semiconductor component. During my RnD activities as a semiconductor component design & verification engineer, I was an early proponent, advocate, technical evangelist, and inventor of computer-aided automation methodologies for integrated circuit design. As I have described in previous blogs, there is no room for 2nd or 3rd pass manufacturing re-spins in chip design. Similarly, when there is a breach in an information security model, there will be no 2nd or 3rd chance to recover unauthorized data transfers, à la WikiLeaks.

Because chip design is a high-stakes “pay to play” game, accurate system models that represent the chip’s functional behavior are imperative. The same holds true for cloud computing security models. In fact, security breaches in multi-tenant cloud computing architectures represent an even greater risk to service providers (SPs) than traditional, proprietary, private corporate IT security escapes. The SP’s liability and exposure is multiplied by the number of separate corporate tenants in their respective cloud service. If we are able to execute a system model under stress conditions that exceed operational demands, then our ability to capture architectural design escapes increases, while the quality of the design is optimized. Any model that we can simulate under a computer-executable program with random, observable, and controllable constraints will reveal architectural defects that we could not anticipate through purely deductive reasoning or analysis. It is for these reasons that UML system models linked to executable OCL constraints have such high value in analyzing the effectiveness of a cloud computing security architecture.

Service Oriented Architecture (SOA) defines a collaboration network of services that constitute a “workflow”. When multiple organizations are engaged in supply chain activities, an inter-organizational workflow emerges that requires a company to couple its internal workflows to those of its partners. This does not sound all that different than crystallizing a cloud SaaS/PaaS offering into a WorkFlow-as-a-Service (WFaaS). A branch of research that I have been quite heavily engaged in is Model Driven Security Engineering. At the University of Innsbruck in Austria, there is a robust, innovative, and active research group that has defined a model driven security engineering methodology for SOA.

As a chip designer, I spent endless hours creating deterministic and constrained random test benches that would emulate real world behaviors that would be experienced by the chip. Unfortunately, for an SoC of any serious magnitude, it is virtually impossible to create a simulated environment that covers EVERY operational condition. By establishing functional coverage metrics, we can track the discrete simulation events that cross the stateful system paths we have defined in our requirements. We can only hope that our functional coverage metrics accurately define the operational states of the component. What we really seek to leverage is an area of verification that we call “formal verification”. That is the ability to mathematically prove the functional correctness of a design WITHOUT the need for endless cycles of discrete event, software simulation.

If we can specify any requirement, such as an information security policy, in a target implementation and then separately in a target-independent abstraction, and subsequently test the two specifications for functional equivalence, we can get very close to eliminating human interpretation, manual translations, or gut-check visual inspections of an implementation. This is where the SECTET model driven security engineering/policy platform can be used to design a cloud computing WFaaS security model that can be formally proven and tested against an architecture’s Web implementation. By leveraging UML’s visual programming model, a more maintainable, understandable, and closed-form architecture can be delivered to the end user. Through target code generation and automated formal model checking, a new security engineering design paradigm can be established.

SECTET Security Engineering Model

This blog entry is based on the work specified in…

  • “Automated Analysis of Security-design Models”, D. Basin, M. Clavel, J. Doser, M. Egea, Information and Software Technology, 2009, Vol 51, pages 815-831.
  • “Model-Driven Security Engineering for Trust Management in SECTET”, M. Alam, R. Breu, M. Hafner.
  • SOA Security, Engineering Security-critical Inter-organizational Applications, M. Hafner, R. Breu, Springer, 2008.

Security requirements for an identity-driven cloud portal

In an earlier blog entry, I expounded upon the disaggregation of the semiconductor supply chain as defined by four (4) distinct stakeholders or identity profiles: (1) Designer, (2) Foundry, (3) IP Provider, and (4) EDA Supplier. Using the SECTET framework, what would a proposed identity profile security architecture look like for a WorkFlow-as-a-Service (WFaaS) cloud computing offering provisioned to service these four identity profiles?

I will outline a non-exhaustive list of identity profile security requirements as applied to a semiconductor design-to-release-manufacturing portal.

  1. The portal services a plurality of distinct companies.
  2. Each company will have multiple portal users.
  3. All portal users would necessarily be cast into one of the four (4) identity profiles.
  4. Some companies will have users that would be cast into multiple identity profiles, e.g. a foundry may also be an intellectual property supplier.
    • Should co-identity users have a separate sub-classification?
    • At the very minimum, co-identities must create a unique security profile.
  5. Inter-company relationships will include:
    • Competitors
      • Users within the same identity profile who belong to different companies would “most likely” fall into the category of competitors.
      • Regardless of caste, to prevent unauthorized access to products, inter-organizational workflows must be restricted between competitive users (a small sketch of this restriction follows the list).
    • Partnerships
    • Supplier-Customer
      • Design class users represent a primary customer for the other three castes.
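As promised above, here is a small sketch of the competitor restriction in requirement 5: two users are treated as competitors when they share an identity profile but belong to different companies, and workflow links between competitors are refused. The users, companies, and profile names are fabricated examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortalUser:
    name: str
    company: str
    profiles: frozenset  # subset of {"designer", "foundry", "ip_provider", "eda_supplier"}

def are_competitors(a: PortalUser, b: PortalUser) -> bool:
    """Same identity profile + different companies => most likely competitors."""
    return a.company != b.company and bool(a.profiles & b.profiles)

def workflow_link_allowed(a: PortalUser, b: PortalUser) -> bool:
    """Permit an inter-organizational workflow only when the two users are not competitors."""
    return not are_competitors(a, b)

u1 = PortalUser("alice", "ChipCo", frozenset({"designer"}))
u2 = PortalUser("bob", "RivalSilicon", frozenset({"designer"}))
u3 = PortalUser("carol", "FabWorks", frozenset({"foundry", "ip_provider"}))

print(workflow_link_allowed(u1, u2))  # False: same profile, different companies
print(workflow_link_allowed(u1, u3))  # True: designer and foundry/IP supplier (customer relation)
```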

More to come on this entry…

WFaaS – From “what can” to “what could”…”Designer” class – A transition from constraint to opportunity

 

"Round about the accredited and orderly facts of every science there
  ever floats a sort of dust-cloud of exceptional observations, of
  occurrences minute and irregular and seldom met with, which it always
  proves more easy to ignore than to attend to... Anyone will renovate his
  science who will steadily look after the irregular phenomena, and when
  science is renewed, its new formulas often have more of the voice of the
  exceptions in them than of what were supposed to be the rules."
    - William James

 

 

This will NEVER work!

We’ve heard certain people referred to as a “Negative Nelly”, i.e. a person who always sees the glass as half empty, as opposed to the eternal optimist who always sees the glass as half full.  By most people’s standards, engineers would fall into the Negative Nelly category.  Engineers are paid to design to specifications that consider the worst-case scenario.  In fact, if engineers were not trained to think and design in this manner, I’m not sure as many of us would be that enthused about flying on commercial airliners. Sales and marketing teams tend to tilt toward the eternal optimist side of the equation, and we tend to like this characteristic in these individuals as well, except, of course, when their optimism lands the CFO on the wrong side of a Sarbanes-Oxley audit. Overall though, given ethical, moral, and competent executive corporate management, these two extremes in corporate cultures seem to balance each other out.

This blog entry’s title phrase, “what can” identifies a Designer mindset that says, “…this is what we can do with the resources that we presently have.”  It is an engineering equivalent of saying that our glass is half empty. The corresponding phrase “what could” identifies a mindset that says, “…IF we had access to this or that, with no restrictions on our resources, then this is what we could accomplish.” In essence, the Designer/engineering equivalent that says our glass is half full.

In my previous analysis, using Value Chain Evolution Theory as applied to the semiconductor supply chain, I identified four (4) disruptive innovations overlaid on four (4) supply chain stakeholder “identity profiles”. One of those identity profiles was the “Designer” class identity profile.  The Designer class identity profile is applicable to many engineering or technology related supply chains, not just the semiconductor supply chain.

I profile Designer class cloud computing users by the following characteristics…

  1. Is in a Research & Development role.
  2. Active in product development (1) architecture, (2) engineering, (3) implementation, (4) verification, (5) quality control or (6) support.
  3. Generates product-related intellectual property that is part and parcel to the core business operations of the company.
  4. Work methodologies are engineering centric.
  5. Uses computer-aided design methodologies and/or applications in some form for execution of their daily activities.

I’ll probably expand upon these definitions as I continue to think about it, but I am seeking a generalized use case profile for Designers that can be ubiquitously applied across many RnD disciplines.

This blog entry will focus primarily on the following indirect network effects that WorkFlow-as-a-Service (WFaaS) cloud services have upon the Designer class cloud identity profile, namely…

  • Is there a change in Designer class operational behaviors when SW application laden workflows are provisioned through demand-based utility computing services, i.e. WFaaS enterprise architectures?
  • How are Designers “constrained” by present software licensing models, and why do WorkFlows-as-a-Service represent “opportunity” for Designer class utility computing users?
  • What impact do these WFaaS-induced behavioral changes have upon Software-as-a-Service (SaaS) revenue models?
  • Can such WFaaS operational changes be quantified in a constraint model that can then define the opportunity cost benefit of WFaaS cloud services?
    • I propose here the use of SysML (systems engineering) parametric models that constrain AND integrate BOTH architectural performance features AND financial & marketing metrics to create a quantitative model to be “solved” under constrained parameters (a rough sketch of this idea appears later in this entry).
      • Such models can then be visualized and iterated through a suite of parameter ranges.

Let’s walk through the early budgeting and scheduling cycle for a semiconductor chip project. The use of computer-aided design software and processing resources is fundamental to project execution and must be addressed at the inception of project management.  Electronic Design Automation (EDA) software applications are among the most expensive software licenses that can be purchased; a single license can list as high as $1M. If you are a small/Tier 2 chip design company that is not considered an “enterprise” customer by an EDA company, discounts off of these list prices will be slim.

The chip project manager must begin the scheduling & budgeting analysis by first determining what percentage of the project budget can be assigned to EDA software.

Let’s stop right here!

Why is the schedule involved in this decision? Because the amount of EDA software you can afford will have a direct impact upon how fast you can get your chip completed.  If the chip team could only afford ONE (1) simulation license, they would NEVER be able to run enough simulation cycles to verify the chip’s functional behavior. If the project purchases more licenses than are needed for a particular phase of the chip design, then a lot of money has just been flushed away. If a company could only afford the base logic synthesis license and was unable to afford a physical synthesis tool, then the designers would spend ten times as much time trying to optimize the chip using older technology methods. The point here is that the volume and sophistication of EDA software that your company can afford will become a determining factor in HOW FAST and WITH WHAT QUALITY your design team will be able to get the chip ready for manufacturing and product deployment.  And remember, without the quality, the chances of a false start on the first manufacturing pass are extremely high, which will usually mean the death of the chip, maybe even the company.

Let’s just pause for a moment and think about the implications of the project manager having to trade off the ultimate success or failure of the project against how much EDA software the company can afford. During a chip design project the use of EDA software is NOT an option.  The project will use upwards of ten different critical EDA software tool applications in the development of the chip. The need for, and volume of, EDA software licenses will come at different times in the development process. Therefore, astute project management teams will take the time and effort to try to match the use demand of the software they need to the dates in the project schedule when their design teams need the software.
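This is exactly the kind of parametric trade-off model proposed earlier in this entry: sweep the number of concurrent licenses and compute both a schedule estimate (the performance side) and a cost estimate (the financial side). Every number below is invented for illustration and bears no relation to real EDA pricing.

```python
TOTAL_SIM_CYCLES = 5_000_000            # regression cycles needed to close verification (assumed)
CYCLES_PER_LICENSE_PER_WEEK = 100_000   # throughput of one license on one machine (assumed)
LICENSE_COST_PER_WEEK = 5_000           # time-based license cost in dollars per week (assumed)
SCHEDULE_COST_PER_WEEK = 40_000         # burn rate of the waiting design team (assumed)
PARALLEL_EFFICIENCY = 0.8               # sub-linear scaling of parallel regressions (assumed)

def project_outcome(num_licenses):
    """Estimate schedule and cost for a given number of concurrent simulation licenses."""
    weekly_cycles = CYCLES_PER_LICENSE_PER_WEEK * num_licenses ** PARALLEL_EFFICIENCY
    weeks = TOTAL_SIM_CYCLES / weekly_cycles
    license_cost = num_licenses * LICENSE_COST_PER_WEEK * weeks
    return weeks, license_cost, license_cost + SCHEDULE_COST_PER_WEEK * weeks

for n in (1, 2, 5, 10, 20, 40):
    weeks, lic, total = project_outcome(n)
    print(f"{n:>2} licenses: {weeks:5.1f} weeks, license cost ${lic:>9,.0f}, total ${total:>10,.0f}")
```

Sweeping the license count (or, in a WFaaS model, the instantaneous capacity drawn from the cloud) makes the schedule-versus-budget tension visible as a curve with a minimum, rather than a negotiation fought on gut feel.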

Software licensing management appears more constraining than liberating!

EDA sales teams do not like this type of negotiation. They are seeking to close the maximum amount of revenue to meet their sales quotas. Herein lies the dichotomy that is plaguing this software/design project relationship, namely, what is good for the software company, IS NOT good for the customer. If you are operating your business with this underlying premise, you are not going to have a happy relationship with your customer base. Am I exaggerating this? Go ask any chip project manager how much he loves the annual visit from his assigned EDA sales account manager, and you will find out how satisfied they are with the present status quo.

If there was ever an industry/software relationship in more dire need of a Software-as-a-Service (SaaS), demand-use model, it is the semiconductor design industry. Oh sure, the EDA companies will often cite their “RE-MIX” policy, where they GENEROUSLY allow their customers to exchange their software licenses on some prorated basis, but does anyone actually believe that this is addressing the customer’s FUNDAMENTAL needs? No! Again, ask any chip project manager if this is what he/she had in mind when EDA software companies offered software license re-mixes as their solution to demand-based utility computing. You get my point!

Cars-as-a-Service (CaaS)?? – Opportunity costs in the automobile industry

I’ll reiterate another point in the debate on cloudonomics that I recently read.

The discussion centered on whom car manufacturers are willing to sell their cars to. We can think of three (3) profiles of car buyers: (1) consumer sale, (2) rental cars, and (3) taxicabs. Car manufacturers initiated their business models through the sale of their product directly to the end users. Clearly, this business represents the overwhelming percentage of sales. However, car manufacturers do not prohibit the sale of their cars to companies that are in the rental car business. For the most part, car manufacturers actually embrace their rental car customers in strategic alliances for exclusive sale of their cars, going so far as to acquire large rental car companies as separate business units of their operations. It is also not strange that some people use a rental car company as a channel for a final sales decision on a particular model, renting the prospective model from the rental company and driving the car around for a weekend to see if they like it. In a similar vein, car manufacturers sell cars to taxicab companies. Pricing, of course, is commensurate with the target market.

Unless we live in Manhattan, we do not all drive around in taxicabs all day, because it is simply not an economically viable means of transportation. However, taking a taxicab to the airport makes a lot of sense, and the regulated rates charged by taxicab companies seem to sustain a business model that profitably keeps the taxicab companies in business. Similarly, we do not all drive around in rental cars, but we do rent them during extended stays on business travel. Renting a car in these circumstances makes a lot of sense, and a healthy rental car market keeps rates competitive.

The existence of the rental car and taxicab markets represents a transition from CONSTRAINT TO OPPORTUNITY! I contend that the demand use model exhibited by the rental car and taxicab companies is the PERFECT example of opportunity costs that should be considered by the software industry. The car manufacturers simply view these adjacent use models for their products as part of their NATURAL market.  They do not view these markets as anomalous behaviors that must be controlled or repressed by non-free-market strategies.

In this light, neither should software companies view the utility demand pricing models offered through ubiquitous computing services as a market threat that must be controlled through collusive activities among competitors. These changes are nothing more than live examples of Value Chain Evolution (VCE) theory at work. Software businesses that seek to maximize their revenues will be positively served by adapting to these market forces. The key component in forming an adaptation strategy for cloud computing is to seek OPPORTUNITY, NOT CONSTRAINT!

Skeptics will immediately contend that the heavy manufacturing business of automobiles bears virtually no resemblance to the creative science of software design.  After all, the fixed and marginal cost models of software and automobile manufacturing occupy perfectly diametrical positions. In addition, such skeptics may contest that VCE Theory does not apply to the software industry. But aren’t software workflows an example of modularity?

It is easy to dismiss VCE Theory and disruption as not applicable to one’s own business sector, and to seek to “spread fear in the name of righteousness.” Such skeptics’ attitude should be more closely aligned with that of Intel’s Andy Grove, i.e. paranoia. Don’t run from change, embrace it! That is precisely what Intel did when the company changed its corporate strategy to focus on the microprocessor market rather than the memory market. Intel’s executives were able to see how the market changes would affect their corporate strategy and subsequently embraced the changing market conditions to the company’s magnificent advantage and success. The same must be true for software companies, who must embrace the business and technology changes being ushered in by the cloud computing market.

More to come…