Yep, Kep…it’s all going to be about workflows!

“Well, you know, if it’s enough already, and I just wanna get some sleep.” - Kramer, Seinfeld, “The Mango”

As my son, George, transitions from 7th to 8th grade and verges on turning 13, I have become his personal math tutor (which has itself become a small battle of two opposing wills). George thinks I have this really irritating habit of just thinking about too many details! He’s looking for that one-word answer and some memorization trick to soothe his understanding. But he is not going to get that from me. He is right about one thing: when I sink my teeth and mind into some technical foray, I do not like to leave any stones unturned. Now, I don’t want to leave the wrong impression. George is a smart kid, not because he has some superior genes, but because he has parents who run a tight ship with regard to his studies. I’ve laid my favorite Lombardi quote on him more than he likes to hear, and until I feel that he’s finally “got it” in math, he’ll continue to hear it a lot more.

“The difference between a successful person and others is not a lack of strength, not a lack of knowledge, but rather a lack of will.”

It kind of reminds me of my grad school days at Georgia Tech and a conversation between Professor Monson Hayes and me at the end of his Advanced Digital Signal Processing course. I told Professor Hayes at the time that I thought I had finally turned the corner on the mathematical theory of random processes. He promptly reminded me not to be so quick; he personally didn’t feel he had actually understood probability and random processes until after the 3rd time he had taught the course. This week, I got that same sort of sick, yet gleeful, feeling when another level of light came on in my head about cloud services and workflows.

Clearly, at the risk of dating myself, I still love those old Seinfeld reruns. In a scene from one of my favorite episodes, “The Mango”, Jerry, in the show’s usual demented manner, gets on the topic of “faking it”. While it’s hard for Jerry to grasp that Elaine was always “faking it” with him, he really gets whacked out when Kramer tells him that he’s “faked it”! What? But, why? Kramer says, “Well, you know, if it’s enough already, and I just wanna get some sleep.”

Which brings me to my own personal technical demons that I have exorcised over the years, a couple of which I am going to excoriate in this blog entry. I’m going to come clean first and say outright that I’ve faked it on cloud workflows! I know it may have sounded like the real thing a few blogs ago, but without knowing it, I “faked it”.

It’s not that I have been off in the weeds on this subject. In fact, in the case of the importance of workflows in cloud services, I knew where the weeds were that I wanted to tromp around in; I just didn’t know I was strolling through a huge field of dandelions that were 6′ tall!

Workflows, weeds & cloudy skies...beauty is in the eye of the beholder

This is what has kind of happened to me in my perception of workflows in cloud services. I was really having fun blogging about how design methodology workflows and WorkFlow-as-a-Service (WFaaS) are certain to become a central tenet of cloud services. But the reality is that I had just decided to “fake” the ending, letting everyone know how great this workflow stuff is, without really EXPERIENCING how great this workflow stuff can be! I just wanted “to get some sleep”, and “it was enough already.” But it’s not “enough already”, and I’m now ready to REALLY write about what we’re going to see in the field of WorkFlow-as-a-Service (WFaaS) in cloud services…

More on this to come…

"Now, be a good cloud, sit down and eat your workflows! They're good for you!"

WFaaS – From “what can” to “what could”…”Designer” class – A transition from constraint to opportunity

 

"Round about the accredited and orderly facts of every science there ever floats a sort of dust-cloud of exceptional observations, of occurrences minute and irregular and seldom met with, which it always proves more easy to ignore than to attend to... Anyone will renovate his science who will steadily look after the irregular phenomena, and when science is renewed, its new formulas often have more of the voice of the exceptions in them than of what were supposed to be the rules."
  - William James

This will NEVER work!

We’ve heard certain people referred to as a “Negative Nelly”, i.e. a person who always sees the glass as half empty, as opposed to the eternal optimist who always sees the glass as half full. By most people’s standards, engineers would fall into the Negative Nelly category. Engineers are paid to design to specifications that consider the worst-case scenario. In fact, if engineers were not trained to think and design in this manner, I’m not sure as many of us would be that enthused about flying commercial airliners. Sales and marketing teams tend to tilt toward the eternal-optimist side of the equation, and we tend to like this characteristic in these individuals as well, except of course when their optimism lands the CFO on the wrong side of a Sarbanes-Oxley audit. Overall though, given ethical, moral, and competent executive corporate management, these two extremes in corporate cultures seem to balance each other out.

This blog entry’s title phrase, “what can” identifies a Designer mindset that says, “…this is what we can do with the resources that we presently have.”  It is an engineering equivalent of saying that our glass is half empty. The corresponding phrase “what could” identifies a mindset that says, “…IF we had access to this or that, with no restrictions on our resources, then this is what we could accomplish.” In essence, the Designer/engineering equivalent that says our glass is half full.

In my previous analysis, applying Value Chain Evolution Theory to the semiconductor supply chain, I identified four (4) disruptive innovations that overlay four (4) supply chain stakeholder “identity profiles”. One of those identity profiles was the “Designer” class identity profile. The Designer class identity profile is applicable to many engineering or technology related supply chains, not just the semiconductor supply chain.

I profile Designer class cloud computing users by the following characteristics…

  1. Is in a Research & Development role.
  2. Is active in product development: (1) architecture, (2) engineering, (3) implementation, (4) verification, (5) quality control, or (6) support.
  3. Generates product-related intellectual property that is part and parcel of the core business operations of the company.
  4. Uses engineering-centric work methodologies.
  5. Uses computer-aided design methodologies and/or applications in some form in the execution of daily activities.

I’ll probably expand upon these definitions as I continue to think about it, but I am seeking a generalized use case profile for Designers that can be ubiquitously applied across many RnD disciplines.

This blog entry will focus primarily on the following indirect network effects that WorkFlow-as-a-Service (WFaaS) cloud services have upon the Designer class cloud identity profile, namely…

  • Is there a change in Designer class operational behaviors when software-application-laden workflows are provisioned through demand-based utility computing services, i.e. WFaaS enterprise architectures?
  • How are Designers “constrained” by present software licensing models, and why do WorkFlows-as-a-Service represent “opportunity” for Designer class utility computing users?
  • What impact do these WFaaS-induced behavioral changes have upon Software-as-a-Service (SaaS) revenue models?
  • Can such WFaaS operational changes be quantified in a constraint model that can then define the opportunity cost benefit of WFaaS cloud services?
    • I propose here the use of SysML (systems engineering) parametric models that constrain AND integrate BOTH architectural performance features AND financial & marketing metrics to create a quantitative model to be “solved” under constrained parameters.
      • Such models can then be visualized and iterated through a suite of parameter ranges.
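To make that parametric idea concrete, here is a minimal sketch in Python rather than SysML proper. All of the numbers (total simulation cycles, per-license throughput, license price, schedule, and budget) are invented for illustration. It sweeps a parameter range and returns the configurations that satisfy both a schedule constraint and a budget constraint, exactly the kind of constrained "solve" described above.

```python
# Hypothetical parametric-constraint sketch: sweep license counts and
# keep the configurations that satisfy both a schedule constraint and a
# budget constraint. All numbers are illustrative, not real EDA pricing.

def project_weeks(sim_licenses, total_sim_cycles=10_000, cycles_per_license_week=500):
    """Verification time shrinks roughly linearly with simulation licenses."""
    return total_sim_cycles / (sim_licenses * cycles_per_license_week)

def license_cost(sim_licenses, price_per_license=250_000):
    """One-time cost of the license pool."""
    return sim_licenses * price_per_license

def feasible_configs(max_weeks, max_budget, license_range=range(1, 21)):
    """Return (licenses, weeks, cost) tuples meeting both constraints."""
    out = []
    for n in license_range:
        weeks, cost = project_weeks(n), license_cost(n)
        if weeks <= max_weeks and cost <= max_budget:
            out.append((n, weeks, cost))
    return out

# Cheapest feasible point under a 4-week schedule and a $2M budget:
configs = feasible_configs(max_weeks=4, max_budget=2_000_000)
best = min(configs, key=lambda t: t[2])
```

A real SysML parametric model would express these same relationships as constraint blocks and let the tooling iterate the parameter ranges and visualize the results; the sweep-and-filter loop here is just the procedural equivalent of that idea.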

Let’s walk through the early budgeting and scheduling cycle for a semiconductor chip project. The use of computer-aided design software and processing resources is fundamental to the project execution and must be addressed at the inception of the project management. Electronic Design Automation (EDA) software applications are among the most expensive software licenses that can be purchased; a single license can list for as much as $1M. If you are a small/Tier 2 chip design company that is not considered an “enterprise” customer by an EDA company, discounts off these list prices will be slim.

The chip project manager must begin the scheduling & budgeting analysis by first determining what percentage of the project budget can be assigned to EDA software.

Let’s stop right here!

Why is the schedule involved in this decision? Because the amount of EDA software you can afford will have a direct impact upon how fast you can get your chip completed. If the chip team could only afford ONE (1) simulation license, they would NEVER be able to run enough simulation cycles to verify the chip’s functional behavior. If the project purchases more licenses than needed for a particular phase of the chip design, then a lot of money has just been flushed. If a company could only afford the base logic synthesis license and not a physical synthesis tool, then the designers would spend ten times as much time trying to optimize the chip using older-technology methods. The point here is that the volume and sophistication of EDA software that your company can afford will become a determining factor in HOW FAST and WITH WHAT QUALITY your design team will be able to get the chip ready for manufacturing and product deployment. And remember, without the quality, the chances of a first-pass manufacturing false start are extremely high, and a false start will usually mean the death of the chip, maybe even the company.

Let’s just pause for a moment here and think about the implications of the project manager having to trade off the ultimate success or failure of the project on how much EDA software the company can afford. During a chip design project the use of EDA software is NOT an option. The project will use upwards of ten different critical EDA software tool applications in the development of the chip. Demand for each tool, and the volume of licenses required, will peak at different times in the development process. Therefore, astute project management teams will take the time and effort to match the use demand for the software to the dates in the project schedule when their design teams actually need it.
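As a back-of-the-envelope illustration of why that matching effort matters, the toy model below compares buying perpetual licenses sized to each phase's peak against paying only for the license-weeks the schedule actually consumes. Every price, rate, and phase duration here is hypothetical, and I assume for simplicity that each phase uses a distinct tool.

```python
# Hypothetical comparison of perpetual licensing sized to peak demand
# versus demand-based pricing matched to a phased project schedule.
# All numbers are invented for illustration.

# phase name -> (weeks in the phase, simultaneous licenses the phase needs)
demand = {
    "synthesis":   (6, 4),
    "simulation":  (12, 10),
    "place_route": (8, 6),
}

PERPETUAL_PRICE = 250_000   # one-time cost per perpetual license
WEEKLY_RATE = 4_000         # demand-based cost per license-week

# Perpetual model: each phase's tool pool must be bought at its peak size.
perpetual_cost = sum(n for _, n in demand.values()) * PERPETUAL_PRICE

# Demand model: pay only for the license-weeks the schedule consumes.
license_weeks = sum(weeks * n for weeks, n in demand.values())
demand_cost = license_weeks * WEEKLY_RATE
```

The gap between the two totals is exactly the money "flushed" on licenses that sit idle outside their phase's window, which is the constraint a WFaaS-style demand model removes.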

Software licensing management appears more constraining than liberating!

EDA sales teams do not like this type of negotiation. They are seeking to close the maximum amount of revenue to meet their sales quotas. Herein lies the dichotomy that is plaguing this software/design project relationship, namely, what is good for the software company IS NOT good for the customer. If you are operating your business with this underlying premise, you are not going to have a happy relationship with your customer base. Am I exaggerating this? Go ask any chip project manager how much they love the annual visit from their assigned EDA sales account manager, and you will find out how satisfied they are with the present status quo.

If there were ever an industry software relationship in more dire need of a Software-as-a-Service (SaaS), demand-use model, it is the semiconductor design industry. Oh sure, the EDA companies will often cite their “RE-MIX” policy, where they GENEROUSLY allow their customers to exchange their software licenses on some prorated basis, but does anyone actually believe that this is addressing the customer’s FUNDAMENTAL needs? No! Again, ask any chip project manager if this is what he/she had in mind when EDA software companies offered software license re-mixes as their solution to demand-based utility computing. You get my point!

Cars-as-a-Service (CaaS)?? – Opportunity costs in the automobile industry

I’ll reiterate another point in the debate on cloudonomics that I recently read.

The discussion surrounded who car manufacturers are willing to sell their cars to. We think about three (3) profiles of car buyers: (1) consumer sale, (2) rental cars, and (3) taxicabs. Car manufacturers initiated their business models through the sale of their product directly to the end users. Clearly, this business represents the overwhelming percentage of sales. However, car manufacturers do not prohibit the sales of their cars to companies that are in the rental car business. For the most part, car manufacturers actually embrace their rental car customers in strategic alliances for exclusive sale of their cars, going so far as to acquire large rental car companies as separate business units of their operations. It is also not strange that some people use a rental car company as a channel for a final purchase decision on a particular model, renting the prospective model from the rental company and driving it around for a weekend to see if they like it. In a similar vein, car manufacturers sell cars to taxicab companies. Pricing, of course, is commensurate with the target market.

Unless we live in Manhattan, we do not all drive around in taxicabs all day, because it is simply not an economically viable means of transportation. However, taking a taxicab to the airport makes a lot of sense, and the regulated rates charged by taxicab companies seem to support a business model that profitably keeps the taxicab companies in business. Similarly, we do not all drive around in rental cars; rather, we rent them during extended stays on business travel. Renting a car in these circumstances makes a lot of sense, and a healthy rental car market keeps rates competitive.

The existence of the rental car and taxicab markets represents a transition from CONSTRAINT TO OPPORTUNITY! I contend that the demand use model exhibited by the rental car and taxicab companies is the PERFECT example of opportunity costs that should be considered by the software industry. The car manufacturers simply view these adjacent use models for their products as part of their NATURAL market.  They do not view these markets as anomalous behaviors that must be controlled or repressed by non-free-market strategies.

In the same light, software companies should not view the utility demand pricing models offered through ubiquitous computing services as a market threat that must be controlled through collusive activities among competitors. These changes are nothing more than live examples of Value Chain Evolution (VCE) theory at work. Software businesses that seek to maximize their revenues will be positively served by adapting to these market forces. The key component in forming an adaptation strategy for cloud computing is to seek OPPORTUNITY NOT CONSTRAINT!

Skeptics immediately contend that the heavy manufacturing business of automobile manufacturing bears virtually no resemblance to the creative science of software design. After all, the fixed and marginal cost models of software and automobile manufacturing occupy diametrically opposed positions. In addition, such skeptics may contest that VCE Theory does not apply to the software industry. But aren’t software workflows an example of modularity?

It is easy for such skeptics to dismiss VCE Theory and disruption as not applicable to their own business sector, and to “spread fear in the name of righteousness.” Their attitude should instead be more closely aligned with that of Intel’s Andy Grove, i.e. paranoia. Don’t run from change, embrace it! That is precisely what Intel did when the company changed its corporate strategy to focus on the microprocessor market rather than the memory market. Intel’s executives were able to see how the market changes would affect their corporate strategy and subsequently embraced the changing market conditions to the company’s magnificent advantage and success. The same must be true for software companies, which must embrace the business and technology changes being ushered in by the cloud computing market.

More to come…

WorkFlow-as-a-Service (WFaaS) – Catalyst for design enablement…crystallizing IaaS/PaaS/SaaS cloud enterprise architectures

As a design engineer, I leveraged computer-aided design as part of my daily routine, thinking very little about the concept of utility computing or cloud computing. From the perspective of a user who has access to all of the computing resources needed to do their job, why would cloud computing be an interesting topic? Why all the fuss about Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and then Platform-as-a-Service (PaaS)?

Entrepreneurial endeavors are in themselves transformative events for the entrepreneur. While the entrepreneurial experience does not end in a net positive outcome for the overwhelming number of attempts, the transformed entrepreneur no longer sees the world through the prism of an employee, but through the clarity of one question: how does a business make money? When you begin to examine the operational aspects of a company in this light, just about every aspect of how a company executes its mission statement is examined for (1) competitiveness, (2) cost, (3) efficiency, and (4) profitability.

It was through the prism of an entrepreneur that I began examining the grass-roots question of WHY this cloud computing movement was gaining steam. I began by delving deep into the use case models for each of the basic cloud services offerings, i.e. IaaS, SaaS, and PaaS. The only effective vantage point from which I could examine the viability of these services was an RnD perspective, so that is where I begin this analysis.

In this blog entry, I will attempt to help qualify the catalyzing effects of IaaS, PaaS, and SaaS enterprise architectures into higher forms of cloud services, such as WorkFlow-as-a-Service (WFaaS) and Digital-Rights-as-a-Service (DRaaS).

Software workflows – “The whole is more than the sum of its parts.” Aristotle, Metaphysica

Modularity, the Zen of Object Oriented Programming. The endless trek toward complete software re-usability lies in modularity. It extends into every fiber and every level of abstraction in the software industry. The Open Source Software movement perhaps exemplifies the essence of modularity in software applications. Software modularity has extreme depth in its employment. At the lowest level, software modularity is manifested in functional libraries, e.g. the C++ Standard Template Library (STL) or the Linux operating system. At the highest levels, software modularity is implemented at the application layer, the most prominent example being Microsoft Office. Software products designed for modularity at the application layer are designed with “workflows” in mind.
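A toy Python sketch of that application-layer modularity: a "workflow" is nothing more than a composition of independent, reusable stages. The stage functions here are invented for illustration, not any real tool's API.

```python
# A workflow as a composition of modular stages: each stage is reusable
# on its own, and the composed whole is "more than the sum of its parts."

from functools import reduce

def compose(*stages):
    """Chain single-argument stages left-to-right into one workflow."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Three interchangeable, reusable modules (illustrative stand-ins)...
def parse(text):      return text.split()
def normalize(words): return [w.lower() for w in words]
def count(words):     return len(words)

# ...composed into a single workflow.
word_count = compose(parse, normalize, count)
```

Swapping, reordering, or extending stages changes the workflow without touching any individual module, which is exactly the property a WFaaS offering would need to expose as a service.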

The Web-based workflows – Mashups?

High Performance Computing (HPC), Manufacturing, & RnD – Drivers Behind Elastic Processing Demand

References:

Toward Corporate IT Standardization Management: Frameworks and Solutions, Robert van Wessel, IGI Global, February 28, 2010

“Standards can result in variety reduction, thereby lowering production costs and creating economies of scale. This refers to the condition where the cost of producing an extra unit of a product decreases as the volume of output increases, in other words the variable costs go down. When the variable costs are low and the fixed costs are high, this may cause a significant entry barrier for competitors, which can prohibit new players from entering the market. When the fixed costs are reduced by an innovation this barrier could be removed. A network externality is a benefit granted to users of such a product by another’s purchase of the product, i.e. every new user in the network increases the value of being connected to that network.”

“Moreover, network effects arise when consumers value compatibility with others, creating economies of scope between different consumers’ purchases. This behaviour often stems from the ability to take advantage of the same features of products and processes. The bandwagon effect occurs when first adopters make a unilateral public commitment to one standard. First adopters of a standard take the highest risk, but they also have the benefit of developing competence early. If others follow the lead they will be compatible at least with the first mover, and potentially with the followers. Bandwagon pressures are caused by the fear of non-adopters appearing different from adopters and possibly performing at a below-average level, if competitors substantially benefit from the standard. So organizations are pressured to adopt standards by the sheer number of adopting organizations in the market even when individual assessments of the merits of standard adoption are unclear. In other words there is a path dependency meaning that decisions by later adopters of a standard depend strongly on those made by previous adopters. The stronger the network effects, the higher the probability that market mechanisms do not work as they should in selecting superior standards as this is influenced by historical events (e.g. QWERTY keyboard versus Dvorak Simplified Keyboard).”

“Next to these direct effects, so-called indirect network effects are recognized. This is the case when adoption of a standard itself does not offer direct benefits on other users of the standard, but the adoption of the standard might ultimately benefit others. The distinction between direct and indirect refers to the source of benefit to participants in the network, not necessarily to the magnitude of the network effect. For example, greater adoption of Xbox® consoles should generate greater variety in Xbox 360™ game titles. Common adoption would allow producers to achieve scale more easily. Katz and Shapiro (1985) showed how an indirect network effect (i.e. the availability of software to support a hardware standard) made the more popular standard more attractive to future adopters (p. 424). Other consumers or producers are likely to adopt such benefits as well.”

Table 1. Selection criteria for IT Standards

| Selection Criteria | Weighting Factor |
| --- | --- |
| Product/Service capability | 10 |
| Product Supportability | 10 |
| Commercial considerations, including Total Cost of Ownership (TCO) | 10 |
| Manufacturer or vendor track record | 9 |
| Existing installed base | 9 |
| Implementation considerations | 9 |
| Product category coverage | 8 |
| Product Manageability | 8 |
| Manufacturer or vendor strategy | 8 |
| Global Supply model | 7 |
| Manufacturer or vendor partnerships | 7 |
| Market Research reports | 7 |
| Manufacturer or vendor references | 6 |

AKF Scale Cube – Foundation for WFaaS Service Level Agreements?

Workflow scalability leverages all three (3) axes of the AKF Scale Cube

I want to talk about a 3D visualization technique for scalability issues as applied in the context of WFaaS, namely, the AKF Scale Cube. My reference for this discussion is “The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise”, M. Abbott, M. Fisher, Addison-Wesley Professional, 2009.

First, an abbreviated description of the AKF Application Scale Cube.

The three (3) axes of the scale cube, X, Y, and Z, represent divisions of applications, processes, or work activities.

  1. “The x-axis of the AKF Application Scale Cube represents the cloning of an application or service such that work can easily be distributed across instances with absolutely no bias. X-axis implementations tend to be easy to conceptualize and typically can be implemented at relatively low cost. They are the most cost-effective way of scaling transaction growth. They can be easily cloned within your production environment from existing systems or “jumpstarted” from “golden master” copies of systems. They do not tend to increase the complexity of your operations or production environment.”
  2. “The y-axis of the AKF Application Scale Cube represents separation of work by service or function within the application. Y-axis splits are meant to address the issues associated with growth and complexity in code base and datasets. The intent is to create both fault isolation as well as reduction in response times for y-axis split transactions. Y-axis splits can scale transactions, data sizes, and code base sizes. They are most effective in scaling the size and complexity of your code base. They tend to cost a bit more than x-axis splits as the engineering team either needs to rewrite services or at the very least disaggregate them from the original monolithic application.”
  3. “The z-axis of the AKF Application Scale Cube represents separation of work based on attributes that are looked up or determined at the time of the transaction. Most often, these are implemented as splits by requestor, customer, or client. Z-axis splits tend to be the most costly implementation of the three types of splits. Although software does not necessarily need to be disaggregated into services, it does need to be written such that unique pods can be implemented. Very often, a lookup service or deterministic algorithm will need to be written for these types of splits. Z-axis splits aid in scaling transaction growth, may aid in scaling instruction sets, and aid in decreasing processing time by limiting the data necessary to perform any transaction. The z-axis is most effective at scaling growth in customers or clients.”
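As a hedged sketch of how the three axes might combine in a WFaaS request router, the Python below splits each incoming request by function (y-axis), shards it by customer (z-axis), and round-robins it across identical clones (x-axis). The hostname scheme, service names, and hashing choice are my own invention for illustration, not Abbott and Fisher's.

```python
# Illustrative-only routing sketch: one decision per AKF axis.

import hashlib

CLONES = ["app-1", "app-2", "app-3"]   # x-axis: identical clones of a service
SERVICES = {                            # y-axis: split by service/function
    "checkout": "svc-checkout",
    "search": "svc-search",
}
SHARDS = 4                              # z-axis: customer pods

def route(service, customer_id, request_no):
    """Pick a target host by combining all three axes of the scale cube."""
    host = SERVICES[service]                              # y: which service
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    shard = int(digest, 16) % SHARDS                      # z: which customer pod
    clone = CLONES[request_no % len(CLONES)]              # x: which clone
    return f"{host}.shard{shard}.{clone}"
```

In a WFaaS Service Level Agreement, each axis could then map to an independently scalable dimension of the guarantee: clone counts for raw throughput, service splits for fault isolation, and customer shards for data locality.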