Cloud-based Intellectual Property Rights – Part 2 – Applying “aspects” to a DRaaS design pattern…

The proliferation of the Internet and the Web services that followed exposed a level of software and hardware security vulnerability that engineers had previously considered unimportant or even irrelevant. Over the past 10 years, the technical community has produced an enormous amount of research and development on Web services security architectures. Cloud computing services and application development through IaaS, SaaS, PaaS, and WFaaS have introduced yet another level of system integration variables through concepts such as multi-tenancy, application mash-ups, and data security. Developers now have to address yet another level of security architecture in cloud design patterns. In this blog, I want to spend some time talking about security “aspects” that can be applied to a specific cloud computing pattern, Digital-Rights-as-a-Service (DRaaS).

I’ll introduce a UML profile that illustrates how a security “aspect” can be defined for, and applied to, a single component of a DRaaS design pattern’s XML schema.

First, from a high level, let’s review the concept of Aspect-Oriented Programming (AOP).  The software engineering community long ago came to grips with the theory and application of Object-Oriented Programming (OOP) as a staple methodology for application development.  While modularity was a foundation of OOP, as applications became more complex and more ubiquitous, that modularity came at the price of code replication across object classes wherever common methods or functionality needed to be implemented.  To reduce this replication, developers would create references and exploit polymorphism across objects. While the code was still modular in nature, the system’s common functional code became “scattered” across the application’s class hierarchy, which in turn had a direct effect on the code’s maintainability.  It became apparent that something more was needed to address OOP code scattering across object hierarchies with common functional methods.  This is when definitions for crosscutting concerns, aspects, advice, join points, and pointcuts entered the genre of advanced OOP, i.e. AOP.  I’ll leave the deep dive tutorial on AOP to the reader.
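To make those terms concrete before moving on, here is a minimal sketch, in Python rather than a dedicated AOP language such as AspectJ, of pulling a scattered security check into a single reusable “aspect”. The class and function names are invented for illustration and are not part of any DRaaS schema.

```python
import functools

def security_aspect(check):
    """A poor man's aspect: run 'before advice' (the check) whenever a
    decorated method (the join point) is invoked."""
    def advice(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            check(self, method.__name__)          # before advice
            return method(self, *args, **kwargs)  # proceed to the join point
        return wrapper
    return advice

def require_license(obj, operation):
    # The cross-cutting concern, written once instead of being scattered
    # across every class in the hierarchy.
    if not getattr(obj, "licensed", False):
        raise PermissionError(f"{operation}: no valid license")

class SimulationJob:
    licensed = True

    @security_aspect(require_license)   # the "pointcut" selects this method
    def run(self):
        return "simulating..."

class SynthesisJob:
    licensed = False

    @security_aspect(require_license)
    def run(self):
        return "synthesizing..."

print(SimulationJob().run())   # passes the check
# SynthesisJob().run()         # would raise PermissionError
```

The decorator stands in for an aspect weaver: the security check is written once and applied wherever the pointcut names a join point, instead of being copied into every class.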

Security parameters are “cross-cutting concerns”, that is, similar security parameters affect multiple objects within the object hierarchy. Such security cross-cutting concerns must be implemented as “aspects”. However, the operative questions that arise are as follows:

  1. If we use an XML Schema as a base for a DRaaS, can we apply aspects to our advantage in the context of a DRaaS XML XSLT?
  2. In Web services, information exchange is defined through secure XML documents.  How can digital rights be expressed in aspects for XML?
  3. Can XML aspects be carried in both a static and dynamic security model for intellectual property?
  4. Can we define an XML aspect methodology that can reverse engineer and track security breaches for cloud-based DRaaS services?

Reference material:

  1. “HyperAdapt: Enabling Aspects for XML”, M. Niederhausen, S. Karol, U. Aßmann, K. Meißner
  2. “A model-based aspect-oriented framework for building intrusion-aware software systems”, Z. Zhu, M. Zulkernine, Information and Software Technology, Vol 51, 2009
  3. “Automated analysis of security-design models”, D. Basin, M. Clavel, J. Doser, M. Egea, Information and Software Technology, Vol 51, 2009



Aspect insertion in XML Documents
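As a first, deliberately small illustration of the idea (this is my own toy sketch in Python, not the HyperAdapt weaver, and the element and attribute names are invented rather than taken from any real DRaaS schema), an “aspect” can be woven into an XML document by matching nodes with a pointcut-like XPath expression and attaching a digital-rights element as the advice:

```python
import xml.etree.ElementTree as ET

# A toy DRaaS-style document; element and attribute names are invented
# for illustration only.
doc = ET.fromstring("""
<workflow>
  <ipBlock name="usb_controller"/>
  <ipBlock name="ddr_phy"/>
</workflow>
""")

def weave_rights_aspect(root, pointcut, rights):
    """'Weave' a digital-rights element into every node matched by the
    pointcut (here just an XPath expression)."""
    for node in root.findall(pointcut):
        advice = ET.SubElement(node, "digitalRights")
        for key, value in rights.items():
            advice.set(key, value)
    return root

weave_rights_aspect(doc, ".//ipBlock",
                    {"license": "per-use", "owner": "ip-provider-42"})
print(ET.tostring(doc, encoding="unicode"))
```

Running the sketch adds a digitalRights child to every ipBlock node, which is the flavor of insertion I have in mind for a DRaaS XML schema.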

More to come…looking at this as a major chapter in an upcoming book proposal on “Meta-applications in Cloud Computing…”

Value Chain Evolution (VCE) Theory in the cloud – What do ISVs REALLY believe about cloud services?

Organizational theory tells us that change is an extremely hard thing to enact within an organization. Additionally, change can come in different forms, e.g. business, cultural, or structural.  For a software company to undertake a foundational change in their business model that could have a major impact on the company’s cash flow, we should anticipate managerial resistance.  If that software company is publicly traded, then count on plenty of questions from Wall Street analysts regarding the soundness of any such decision that introduces financial risk. Risk is a direct consequence of: (1) scarcity of knowledge, (2) absence of facts, and (3) uncertainty of future events. Only through a reduction or elimination of any or all of the three (3) factors can risk be mitigated or absolved.

See...I told you so!

Value Chain Evolution (VCE) Theory is concerned with the technological and economic disruptions that affect supply chains. As valuation points within a supply chain become outsourced to niche or specialty suppliers, disruptions are in play. A technology that serves a niche market, either through price or functional features, but lacks the performance trajectory to satisfy the established market, will continue to improve its performance in such a manner as to overtake the incumbent solutions. It is my opinion that cloud computing is a technology disruption on just such a performance trajectory. The subtlety of the cloud computing model lies in its employment of ideas both old and new, a combination of technology and the application of new business models for computing.

The following points highlight what I have perceived as the evolution of the cloud computing market.

  1. The enabling technology of the Internet provided a data communications foundation upon which large clusters of cheap personal computers could be utilized in compute intensive applications.
  2. The personal computer then became a “server” resource. The market responded by delivering high-density clustering solutions, re-engineering the packaging of high-end Intel-based computers.
    • As a result, a new type of data center emerged.
  3. The management requirement of this new data center gave birth to a new software sector, designated “middleware”.
  4. Out of the requirement to optimize the resource utilization of the vast processing resources in the new data center re-emerged the application of an older mainframe concept, namely, virtualization or hypervisors.

What was previously considered a corporate requirement to develop, establish, and manage in-house data centers can now be outsourced to specialty suppliers who can provide a higher grade of both functional and price performance. What was previously considered a competitive differentiator has become a commodity, “good enough” to be outsourced.

However, cloud computing services have created a new dilemma for software companies.  Traditional software leasing models are in competition with cloud-based Software-as-a-Service (SaaS) offerings by cloud service providers.  The operative questions to ISV product management look something like this:

  1. Will adoption of an SaaS licensing model reduce my present revenue stream?
  2. Will an SaaS model cut into my profit margins?
  3. How much new RnD investment will be needed to transition to an SaaS service model?
  4. How much cannibalization of my present software leasing models will result from an SaaS service?
  5. What competitive advantages/disadvantages will an SaaS “first mover” have/suffer?
  6. Can “opportunity cost” be defined or qualified between SaaS services and traditional software licensing models?
  7. In the VCE phraseology, should we “fight or flee” in the face of a competitive SaaS threat to our existing market share?
  8. Can the SaaS and traditional SW licensing models peacefully coexist?  Are they synergistic or symbiotic?

The discussion that follows explores some of the psychology involved when a software company transitions an application to a Software-as-a-Service (SaaS) business model.

I’d like to set the stage first with a discussion about “opportunity cost”.  What is it?  Why worry about opportunities that are “lost” if we never knew about them in the first place?  Does a tree that falls in the forest make noise if no one is there to hear it?  If someone is not using a company’s software product, or not using as much of it as they would like to, who should be concerned about this issue?

“The consequences some of which we have already touched upon in previous chapters can be summarized in two words “lost opportunity”. The basic idea of opportunity is that for any choice you make, any action you undertake, you are giving up some other choice or action you could have done instead and that best alternative is the opportunity of the choice you made. In other words, there is always an opportunity cost decision made when stakeholder choices are determined.”

“The cost of the next best alternative – if we spend on this what is the value of the foregone alternative? Note: correct “economic decision-making” uses opportunity cost measures.”

(Managing Stakeholders in Software Development Projects, Butterworth-Heinemann, 12/27/2004, John McManus)

It's time for your annual renewal!!

Evolution of Software Licensing Models

I don’t think it is necessary to walk this history all the way back to the days of IBM’s hardware leasing agreements for the IBM 360 mainframes, but it is worth thinking about the economics of IBM’s early mainframe leasing models. In the early days of the IBM 360, customers were not allowed to purchase an IBM mainframe. The only option available was to lease the use of the mainframe and pay IBM Net30, Net60, or some other agreed-upon annuity of payments. What this did for IBM’s business model was several things:

  1. IBM retained ownership of the actual hardware, thereby allowing a higher degree of restriction upon the customer as to how, where, and under what conditions the equipment would be under the customer’s control. These restrictions served as a deterrent, albeit a minor one in some eyes, against competitors easily re-creating the hardware design of the IBM 360.
  2. The annuity of leasing payments from customers created a predictable revenue stream for IBM. Wall Street analysts love predictable revenue streams. It makes life very easy for equity valuations!
  3. The downside of the leasing arrangement was that IBM was always carrying an exorbitantly high level of capital expense. That said, this was probably less of a downside than it seems, given the selection of depreciation methods afforded to GAAP accountants.

Software is not subject to the high fixed costs associated with hardware. The marginal cost of software is effectively zero, and gross margins for software approach 90%. With the emergence of the PC, consumers were thrown into a whirlwind of software purchasing options that even the software manufacturers couldn’t keep up with as they attempted to control the burgeoning black market of software piracy. Consumers were thrust into a new world of ethical dilemmas as the fledgling PC software market figuratively weaned itself from milk to solid food. Even today, Microsoft’s product key strategy is a reasonably successful mechanism for localizing the illegal distribution of their software products…kinda like localizing the spread of the Ebola virus in small villages in the Sudan!

For the consumer market, outside of the yearly cash cow of tax software, the perpetual licensing model is still the norm. Bill Gates owes his wealth to the creativity of Microsoft programmers who were able to continue a stream of addictive enhancements that keeps consumers in that rat race of “upgrades”! (I’ve always been a sucker for the “latest version”!) However, when ISVs began extending these perpetual licensing models into the enterprise, it became clear very quickly that the real “negotiating” factor that created the revenue predictability Wall Street demanded lay in the annual “maintenance fees”.

The software maintenance fees could rise as high as 30% of the list price of the software owned by a customer. What some enterprising enterprise companies began to do was avoid paying the maintenance cost and simply live off of the original software version of their purchased 99-year perpetual licenses. In this manner, they could reduce their annual costs until such time they could no longer live with the old version of software due to either new technology supported features or the software bugs became too hard to live with.
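To put rough numbers on the deferment strategy (the figures below are purely illustrative assumptions, not actual vendor pricing), the attraction to the customer is easy to see:

```python
# Illustrative assumptions only: a $1.0M perpetual license carrying a 30%
# annual maintenance fee, versus "living off" the original version.
list_price = 1_000_000          # one-time perpetual license fee
maintenance_rate = 0.30         # annual maintenance as a fraction of list
years = 5                       # how long the old version remains tolerable

with_maintenance = list_price + years * maintenance_rate * list_price
deferred = list_price           # pay nothing after the initial purchase

print(f"Perpetual + maintenance over {years} yrs: ${with_maintenance:,.0f}")
print(f"Deferred maintenance ('live with v1'):    ${deferred:,.0f}")
print(f"Deferral avoids ${with_maintenance - deferred:,.0f} until an upgrade becomes unavoidable")
```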

This “deferment strategy” on maintenance costs didn’t sit well with the ISVs, so they had to come up with a way to deal with these non-compliers on the loose. The result was that the ISVs would enact a “pay the piper” policy for non-complying annual maintenance contracts. When a non-complying company did come back for the latest version, the ISV account manager would not agree to any discount off of list price. In fact, the customer would be charged back payments for annual maintenance, or else they could take their business somewhere else! Ah…the sweet conundrum of “lock-in”!

As it turned out, from the viewpoint of a securities analyst or accountant, the annual maintenance fees began looking more like a time-based subscription fee or annuity and the perpetual license cost appeared as a “set up” cost or customer initiation fee. It was primarily a year’s worth of GAAP accounting principles that stood in the way of simply foregoing the perpetual license fee and proceeding to a straight up multi-year lease of the software licenses with the annual maintenance fees built into the leasing terms.

This is the unfortunate state that enterprise software customers are left to deal with in managing their present software needs. It has become a full-time job for someone, or some group of personnel, to manage the software licensing negotiations for the respective software applications that the company purchases. In a cloud computing Software-as-a-Service (SaaS) business model, such process complexities expose quantifiable cost savings for both end users and ISVs. However, any transition to a SaaS model must not be a one-way street of cost savings at the expense of the ISVs. This simply will not work. We must develop a SaaS transition model that will accommodate financial advantages for both customers and suppliers.

See Michael Cusumano’s book, The Business of Software, for the evolution of software business models.  Software companies today look very much like consulting companies, and consulting companies have begun to look more like software companies.

Open Source Nightmare!

 

Yeah, baby!

Good for operating systems and web servers, but…but what? The open source LAMP model for web services dominates the Internet landscape. Case closed, open source wins, right? No way, but the open source movement has certainly made its presence known, and in more ways than simply providing software code and executable binaries.

Categories of SW applications that affect licensing models:

  1. Consumer
  2. Engineering RnD
  3. Professional

More to come…

WFaaS – From “what can” to “what could”…”Designer” class – A transition from constraint to opportunity

 

"Round about the accredited and orderly facts of every science there
  ever floats a sort of dust-cloud of exceptional observations, of
  occurrences minute and irregular and seldom met with, which it always
  proves more easy to ignore than to attend to... Anyone will renovate his
  science who will steadily look after the irregular phenomena, and when
  science is renewed, its new formulas often have more of the voice of the
  exceptions in them than of what were supposed to be the rules."
    - William James

 

 

This will NEVER work!

We’ve heard the reference to certain people as being a “Negative Nelly”, i.e. a person who always sees the glass as half empty, as opposed to the eternal optimist who always sees the glass as half full.  By most people’s standards, engineers would fall into the Negative Nelly category.  Engineers are paid to design to specifications that consider the worst case scenario.  In fact, if engineers were not trained to think and design in this manner, I’m not sure as many of us would be that enthused about flying commercial airliners. Sales and marketing teams tend to tilt toward the eternal optimist side of the equation, and we tend to like this characteristic in these individuals as well, except of course when their optimism lands the CFO on the wrong side of a Sarbanes-Oxley audit. Overall though, given ethical, moral, and competent executive corporate management, these two extremes in corporate culture seem to balance each other out.

This blog entry’s title phrase, “what can” identifies a Designer mindset that says, “…this is what we can do with the resources that we presently have.”  It is an engineering equivalent of saying that our glass is half empty. The corresponding phrase “what could” identifies a mindset that says, “…IF we had access to this or that, with no restrictions on our resources, then this is what we could accomplish.” In essence, the Designer/engineering equivalent that says our glass is half full.

In my previous analysis, using Value Chain Evolution Theory as applied to the semiconductor supply chain, I identified four (4) disruptive innovations that overlay onto four (4) supply chain stakeholder “identity profiles”. One of those identity profiles was the “Designer” class identity profile.  The Designer class identity profile is applicable to many engineering or technology related supply chains, not just the semiconductor supply chain.

I profile Designer class cloud computing users by the following characteristics…

  1. Is in a Research & Development role.
  2. Active in product development: (1) architecture, (2) engineering, (3) implementation, (4) verification, (5) quality control, or (6) support.
  3. Generates product-related intellectual property that is part and parcel to the core business operations of the company.
  4. Work methodologies are engineering centric.
  5. Uses computer-aided design methodologies and/or applications in some form for execution of their daily activities.

I’ll probably expand upon these definitions as I continue to think about it, but I am seeking a generalized use case profile for Designers that can be ubiquitously applied across many RnD disciplines.

This blog entry will focus primarily on the following indirect network effects that WorkFlow-as-a-Service (WFaaS) cloud services have upon the Designer class cloud identity profile, namely…

  • Is there a change in Designer class operational behaviors when software-application-laden workflows are provisioned through demand-based utility computing services, i.e. WFaaS enterprise architectures?
  • How are Designers “constrained” by present software licensing models, and why do WorkFlows-as-a-Service represent “opportunity” for Designer class utility computing users?
  • What impact do these WFaaS-induced behavioral changes have upon Software-as-a-Service (SaaS) revenue models?
  • Can such WFaaS operational changes be quantified in a constraint model that can then define the opportunity cost benefit of WFaaS cloud services?
    • I propose here the use of SysML (systems engineering) parametric models that constrain AND integrate BOTH architectural performance features AND financial & marketing metrics to create a quantitative model to be “solved” under constrained parameters (a sketch of the idea follows this list).
      • Such models can then be visualized and iterated through a suite of parameter ranges.
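Since SysML parametric diagrams are a modeling notation rather than executable code, the sketch below uses plain Python as a stand-in to show the shape of the idea: a model that ties an architectural performance metric (verification turnaround) to financial metrics (license spend against a budget), then iterates the model across a parameter range. Every number and name is an assumed placeholder, not data from a real project.

```python
# Toy stand-in for a SysML parametric model: constrain schedule and cost,
# then sweep the license-count parameter. All values are assumptions.
CYCLES_NEEDED = 2_000_000         # simulation cycles to close verification
CYCLES_PER_LICENSE_DAY = 10_000   # throughput of a single license
ANNUAL_LICENSE_COST = 150_000     # $/license/year under a traditional lease
BUDGET = 900_000                  # financial constraint
DEADLINE_DAYS = 120               # schedule constraint

def evaluate(licenses: int):
    days = CYCLES_NEEDED / (licenses * CYCLES_PER_LICENSE_DAY)
    cost = licenses * ANNUAL_LICENSE_COST
    feasible = days <= DEADLINE_DAYS and cost <= BUDGET
    return days, cost, feasible

for licenses in range(1, 11):
    days, cost, ok = evaluate(licenses)
    print(f"{'OK' if ok else '--'} {licenses:2d} licenses -> {days:6.1f} days, ${cost:,}")
```

The same structure can be re-solved with WFaaS pay-per-use pricing substituted for the annual lease, which is exactly the kind of iteration over parameter ranges described above.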

Let’s walk through the early budgeting and scheduling cycle for a semiconductor chip project. The use of computer-aided design software and processing resources is fundamental to project execution and must be addressed at the inception of project management.  Electronic Design Automation (EDA) software applications are among the most expensive software licenses that can be purchased; a single license can list as high as $1M. If you are a small/Tier 2 chip design company that is not considered an “enterprise” customer by an EDA company, discounts off of these list prices will be slim.

The chip project manager must begin the scheduling & budgeting analysis by first determining what percentage of the project budget can be assigned to EDA software.

Let’s stop right here!

Why is the schedule involved in this decision? Because the amount of EDA software you can afford will have a direct impact upon how fast you can get your chip completed.  If the chip team could only afford ONE (1) simulation license, they would NEVER be able to run enough simulation cycles to verify the chip’s functional behavior. If the project purchases more licenses than are needed for a particular phase of the chip design, then a lot of money has just been flushed. If a company could only afford the base logic synthesis license and could not afford a physical synthesis tool, then the designers would spend ten times as much time trying to optimize the chip using older technology methods. The point here is that the volume and sophistication of EDA software that your company can afford will become a determining factor in HOW FAST and WITH WHAT QUALITY your design team will be able to get the chip ready for manufacturing and product deployment.  And remember, without the quality, the chances of a first-pass manufacturing false start are extremely high, which will usually mean the death of the chip, maybe even the company.

Let’s just pause for a moment here and think about the implications of the project manager having to trade off the ultimate success or failure of the project on how much EDA software the company can afford. During a chip design project the use of EDA software is NOT an option.  The project will use upwards of ten different critical EDA software tool applications in the development of the chip. The use and volume of licenses of EDA software will come at different times in the development process. Therefore, astute project management teams will take the time and effort to try to match the use demand of the software they need to the date in the project schedule that their design teams need the software.
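Here is one way to make that demand-matching argument concrete. The phase names, durations, and license counts below are invented for illustration; the point is the gap between leasing to peak demand for the whole project and paying only for what each phase actually needs:

```python
# Hypothetical per-phase EDA license demand for one chip project.
phases = [
    # (phase, months, licenses needed during that phase)
    ("RTL design & lint",     4,  5),
    ("Functional simulation", 6, 40),
    ("Logic synthesis",       3,  8),
    ("Place & route",         4,  6),
    ("Signoff",               3, 12),
]

MONTHLY_COST_PER_LICENSE = 4_000   # illustrative $/license/month

peak_licenses = max(need for _, _, need in phases)
total_months = sum(months for _, months, _ in phases)

lease_to_peak = peak_licenses * total_months * MONTHLY_COST_PER_LICENSE
match_demand = sum(need * months * MONTHLY_COST_PER_LICENSE
                   for _, months, need in phases)

print(f"Lease to peak demand for the whole project: ${lease_to_peak:,}")
print(f"Match demand phase by phase (WFaaS-style):  ${match_demand:,}")
```

With these made-up numbers the peak-leased project spends more than twice what a demand-matched project would, which is the money a project manager is trying to recover when negotiating license counts against the schedule.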

Software licensing management appears more constraining than liberating!

EDA sales teams do not like this type of negotiation. They are seeking to close the maximum amount of revenue to meet their sales quotas. Herein lies the dichotomy that is plaguing this software/design project relationship, namely, what is good for the software company, IS NOT good for the customer. If you are operating your business with this underlying premise, you are not going to have a happy relationship with your customer base. Am I exaggerating this? Go ask any chip project manager how much he loves the annual visit from his assigned EDA sales account manager, and you will find out how satisfied they are with the present status quo.

If there was ever an industry/software relationship in dire need of a Software-as-a-Service (SaaS), demand-use model, it is the semiconductor design industry. Oh sure, the EDA companies will often cite their “RE-MIX” policy, whereby they GENEROUSLY allow their customers to exchange their software licenses on some prorated basis, but does anyone actually believe that this addresses the customer’s FUNDAMENTAL needs? No! Again, ask any chip project manager if this is what he/she had in mind when EDA software companies offered software license re-mixes as their solution to demand-based utility computing. You get my point!

Cars-as-a-Service (CaaS)?? – Opportunity costs in the automobile industry

I’ll reiterate another point in the debate on cloudonomics that I recently read.

The discussion surrounded who car manufacturers are willing to sell their cars to. We can think about three (3) profiles of car buyers: (1) consumer sale, (2) rental cars, and (3) taxicabs. Car manufacturers initiated their business models through the sale of their product directly to end users. Clearly, this business represents the overwhelming percentage of sales. However, car manufacturers do not prohibit the sale of their cars to companies in the rental car business. For the most part, car manufacturers actually embrace their rental car customers in strategic alliances for exclusive sale of their cars, going so far as to acquire large rental car companies as separate business units of their operations. It is also not strange that some people use a rental car company as a channel for a final domestic sales decision for a particular model, renting the prospective model from the rental company and driving the car around for a weekend to see if they like it. In a similar vein, car manufacturers sell cars to taxicab companies. Pricing, of course, is commensurate with the target market.

Unless we live in Manhattan, we do not all drive around in taxicabs all day, because it is simply not an economically viable means of transportation. However, taking a taxicab to the airport makes a lot of sense, and the regulated rates charged by taxicab companies seem to support a business model that profitably keeps the taxicab companies in business. Similarly, neither do we all drive around in rental cars; rather, we rent on extended stays during business travel. Renting a car in these circumstances makes a lot of sense, and a healthy rental car market keeps rates competitive.

The existence of the rental car and taxicab markets represents a transition from CONSTRAINT TO OPPORTUNITY! I contend that the demand use model exhibited by the rental car and taxicab companies is the PERFECT example of opportunity costs that should be considered by the software industry. The car manufacturers simply view these adjacent use models for their products as part of their NATURAL market.  They do not view these markets as anomalous behaviors that must be controlled or repressed by non-free-market strategies.

In this light, neither should software companies view the utility demand pricing models offered through ubiquitous computing services as a market threat that must be controlled through collusive activities among competitors. These changes are nothing more than live examples of Value Chain Evolution (VCE) theory at work. Software businesses that seek to maximize their revenues will be positively served by adapting to these market forces. The key component in forming an adaptation strategy for cloud computing is to seek OPPORTUNITY, NOT CONSTRAINT!

Skeptics will immediately contend that the heavy manufacturing business of automobiles bears virtually no resemblance to the creative science of software design.  After all, the fixed- and marginal-cost models of software and automobile manufacturing occupy perfectly diametrical positions. In addition, such skeptics may contest that VCE Theory does not apply to the software industry. But aren’t software workflows an example of modularity?

It is easy to dismiss VCE Theory and disruption as not applicable to one’s own business sector, and to seek to “spread fear in the name of righteousness.” Such skeptics’ attitude should be more closely aligned with that of Intel’s Andy Grove, i.e. paranoia. Don’t run from change, embrace it! That is precisely what Intel did when the company changed its corporate strategy to focus on the microprocessor market rather than the memory market. Intel’s executives were able to see how the market changes would affect their corporate strategy and subsequently embraced the changing market conditions to the company’s magnificent advantage and success. The same must be true for software companies, which must embrace the business and technology changes being ushered in by the cloud computing market.

More to come…

DRaaS – Part 1 – Indirect network effects of cloud-based IP licensing and distribution…

This blog entry will opine on the following topics and questions…

  • What are indirect network effects as defined in the context of collaborative, cloud-computing IT systems?
  • Cloud computing enterprise architectures (IaaS, PaaS, SaaS) are creating a host of new valuation points in complex supply chains…
    • Collaboration-centric IT enterprise architectures create such indirect network effects.
    • One crystallization of IaaS/PaaS/SaaS design patterns is WorkFlow-as-a-Service (WFaaS).
  • WorkFlow-as-a-Service (WFaaS) & Digital-Rights-as-a-Service (DRaaS) are two integrally related, highly interdependent cloud services design patterns…

An MBA view of network effects

I want to first frame the impact of Intellectual Property (IP) in light of my expertise in semiconductor System-on-Chip (SoC) design methodology (please refer to my blog entry on the supply chain evolution of the semiconductor industry for greater insight into the delivery of IP into the semiconductor supply chain). I will then generalize the discussion of indirect network effects for IP in orthogonal supply chains.

Indirect Network Effects

“In markets with indirect network effects, the value of any component does not depend directly on the number of other users of that component (hence the terminology), but rather on the availability of complementary and compatible components. For example, a PC is more valuable as the set of available software for that PC grows.”

(The Handbook of Technology and Innovation Management, Wiley-Blackwell, 7/14/2009, Scott Shane)

“Next to these direct effects, so-called indirect network effects are recognized. This is the case when adoption of a standard itself does not offer direct benefits on other users of the standard, but the adoption of the standard might ultimately benefit others. The distinction between direct and indirect refers to the source of benefit to participants in the network not necessarily to the magnitude of the network effect. For example, greater adoption of Xbox® consoles should generate greater variety in Xbox 360™ game titles. Common adoption would allow producers to achieve scale more easily. Katz and Shapiro (1985) showed how an indirect network effect (i.e. the availability of software to support a hardware standard) made the more popular standard more attractive to future adopters. Other consumers or producers are likely to adopt such benefits as well.”

(Toward Corporate IT Standardization Management: Frameworks and Solutions, IGI Global, 2/28/2010, Robert van Wessel)

The hyper-effect of IP in a cloud!

To set the stage for semiconductor SoC design, we need look no further than the now famous Moore’s Law. Moore’s Law simply states that the “quantity of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years” (http://en.wikipedia.org/wiki/Moore’s_law). The theoretical effect of Moore’s Law has been an exponential increase in the state space defined by the binary logic circuits that compose an integrated circuit’s functionality. The practical effect for large integrated circuit designs has been that it has become computationally impossible to verify the functional state of these devices before committing the component to manufacture.
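A quick back-of-the-envelope calculation shows why. The starting point below (roughly 2,300 transistors on the earliest microprocessors, circa 1971) and the flip-flop count are illustrative assumptions, but the shape of the argument does not depend on the exact numbers:

```python
# Moore's Law as a doubling every two years, and the state-space blow-up
# it implies. Numbers are illustrative assumptions.
BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300   # roughly the first microprocessors

def transistors(year):
    return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / 2)

print(f"Projected transistor count for 2010: ~{transistors(2010):,.0f}")

# Even a modest fraction of those transistors used as storage elements makes
# exhaustive functional verification hopeless:
flip_flops = 100_000
print(f"{flip_flops:,} flip-flops imply up to 2**{flip_flops} reachable states")
```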

Therein lies the impetus to leverage pre-packaged, pre-verified circuits that can be placed as self-contained functional blocks. We in the semiconductor industry call this design process System-on-Chip, or SoC. The effect we want to explore in this blog is: how does provisioning semiconductor IP within a collaborative computing cloud through a secure Digital-Rights-as-a-Service (DRaaS) design pattern create indirect network effects? In many ways, both DIRECT and INDIRECT network effects are spawned from Intellectual Property (IP), whether that IP is patent protected or copyrighted. In the case of INDIRECT network effects, the multiplicity of complementary IP components or products creates greater value in the foundational IP element or product.

Indirect network effects come in the strangest ways –> http://www.prohiphop.com/2006/08/network_effects.html

WorkFlow-as-a-Service (WFaaS) – Catalyst for design enablement…crystallizing IaaS/PaaS/SaaS cloud enterprise architectures

As a design engineer, I leveraged computer-aided design in the execution of my daily routine, thinking very little about the concept of utility computing or cloud computing.  From the perspective of a user who has access to all of the computing resources needed to do their job, why would cloud computing be an interesting topic?  Why all the fuss about Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and then Platform-as-a-Service (PaaS)?

Entrepreneurial endeavors are in themselves transformative events for the entrepreneur.  While the entrepreneurial experience is not a net positive outcome for the overwhelming majority of attempts, the transformed entrepreneur no longer sees the world through the prism of an employee, but through the clarity of “how does a business make money?”  When you begin to examine the operational aspects of a company in this light, just about every aspect of how a company executes its mission statement is examined for (1) competitiveness, (2) cost, (3) efficiency, and (4) profitability.

It was through the prism of an entrepreneur that I began examining the grass-roots question of WHY this cloud computing movement was gaining steam.  I began by delving deep into the use case models for each of the basic cloud services offerings, i.e. IaaS, SaaS, and PaaS.  The only effective vantage point from which I could examine the viability of these services was an RnD perspective, so that is where I begin this analysis.

In this blog entry, I will attempt to help qualify the catalyzing effects of IaaS, PaaS, and SaaS enterprise architectures into higher forms of cloud services, such as WorkFlow-as-a-Service (WFaaS) and Digital-Rights-as-a-Service (DRaaS).

Software workflows – “The whole is more than the sum of its parts.” Aristotle, Metaphysica

Modularity, the Zen of Object-Oriented Programming.  The endless trek toward complete software re-usability lies in modularity.  It extends into the very fabric and every level of abstraction of the software industry. The Open Source Software movement perhaps exemplifies the essence of modularity in software applications. Software modularity has extreme depth in its employment. At the lowest level, software modularity is manifested in functional libraries, e.g. the C++ Standard Template Library (STL) or the Linux operating system.  At the highest levels, software modularity is implemented at the application layer, the most prominent example being Microsoft Office. Software products that are designed for modularity at the application layer are designed with “workflows” in mind.

The Web-based workflows – Mashups?

High Performance Computing (HPC), Manufacturing, & RnD – Drivers Behind Elastic Processing Demand

References:

Toward Corporate IT Standardization Management: Frameworks and Solutions, Robert van Wessel, IGI Global, February 28, 2010

“Standards can result in variety reduction, thereby lowering production costs and creating economies of scale. This refers to the condition where the cost of producing an extra unit of a product decreases as the volume of output increases, in other words the variable costs go down. When the variable costs are low and the fixed costs are high, this may cause a significant entry barrier for competitors, which can prohibit new players from entering the market. When the fixed costs are reduced by an innovation this barrier could be removed. A network externality is a benefit granted to users of such a product by another’s purchase of the product, i.e. every new user in the network increases the value of being connected to that network.”

“Moreover, network effects arise when consumers value compatibility with others, creating economies of scope between different consumers’ purchases. This behaviour often stems from the ability to take advantage of the same features of products and processes. The bandwagon effect occurs when first adopters make a unilateral public commitment to one standard. First adopters of a standard take the highest risk, but they also have the benefit of developing competence early. If others follow the lead they will be compatible at least with the first mover, and potentially with the followers. Bandwagon pressures are caused by the fear of non-adopters appearing different from adopters and possibly performing at a below-average level, if competitors substantially benefit from the standard. So organizations are pressured to adopt standards by the sheer number of adopting organizations in the market even when individual assessments of the merits of standard adoption are unclear. In other words there is a path dependency meaning that decisions by later adopters of a standard depend strongly on those made by previous adopters. The stronger the network effects, the higher the probability that market mechanisms do not work as they should in selecting superior standards as this is influenced by historical events (e.g. QWERTY keyboard versus Dvorak Simplified Keyboard).”

“Next to these direct effects, so-called indirect network effects are recognized. This is the case when adoption of a standard itself does not offer direct benefits on other users of the standard, but the adoption of the standard might ultimately benefit others. The distinction between direct and indirect refers to the source of benefit to participants in the network, not necessarily to the magnitude of the network effect. For example, greater adoption of Xbox® consoles should generate greater variety in Xbox 360™ game titles. Common adoption would allow producers to achieve scale more easily. Katz and Shapiro (1985) showed how an indirect network effect (i.e. the availability of software to support a hardware standard) made the more popular standard more attractive to future adopters (p. 424). Other consumers or producers are likely to adopt such benefits as well.”

Table 1. Selection criteria for IT Standards

Selection Criteria                                                  | Weighting Factor
Product/Service capability                                          | 10
Product Supportability                                              | 10
Commercial considerations, including Total Cost of Ownership (TCO)  | 10
Manufacturer or vendor track record                                 | 9
Existing installed base                                             | 9
Implementation considerations                                       | 9
Product category coverage                                           | 8
Product Manageability                                               | 8
Manufacturer or vendor strategy                                     | 8
Global Supply model                                                 | 7
Manufacturer or vendor partnerships                                 | 7
Market Research reports                                             | 7
Manufacturer or vendor references                                   | 6

AKF Scale Cube – Foundation for WFaaS Service Level Agreements?

Workflow scalability leverages all three (3) axes of the AKF Scale Cube

I want to talk about a 3D visualization technique for scalability issues as applied in the context of WFaaS, namely, the AKF Scale Cube. My reference for this discussion is “The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise”, M. Abbott, M. Fisher, Addison-Wesley Professional, 2009.

First an abbreviated description of the AKF Application Scale Cube.

The three (3) axes of the scale cube, X, Y, and Z, represent applications and divisions of processes or work activities.

  1. “The x-axis of the AKF Application Scale Cube represents the cloning of an application or service such that work can easily be distributed across instances with absolutely no bias. X-axis implementations tend to be easy to conceptualize and typically can be implemented at relatively low cost. They are the most cost-effective way of scaling transaction growth. They can be easily cloned within your production environment from existing systems or “jumpstarted” from “golden master” copies of systems. They do not tend to increase the complexity of your operations or production environment.”
  2. “The y-axis of the AKF Application Scale Cube represents separation of work by service or function within the application. Y-axis splits are meant to address the issues associated with growth and complexity in code base and datasets. The intent is to create both fault isolation as well as reduction in response times for y-axis split transactions. Y-axis splits can scale transactions, data sizes, and code base sizes. They are most effective in scaling the size and complexity of your code base. They tend to cost a bit more than x-axis splits as the engineering team either needs to rewrite services or at the very least disaggregate them from the original monolithic application.”
  3. “The z-axis of the AKF Application Scale Cube represents separation of work based on attributes that are looked up or determined at the time of the transaction. Most often, these are implemented as splits by requestor, customer, or client. Z-axis splits tend to be the most costly implementation of the three types of splits. Although software does not necessarily need to be disaggregated into services, it does need to be written such that unique pods can be implemented. Very often, a lookup service or deterministic algorithm will need to be written for these types of splits. Z-axis splits aid in scaling transaction growth, may aid in scaling instruction sets, and aid in decreasing processing time by limiting the data necessary to perform any transaction. The z-axis is most effective at scaling growth in customers or clients.”
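To ground the three splits above in WFaaS terms, here is a small routing sketch of my own (the service names, shard counts, and routing function are invented, and this is not code from the book): x-axis clones share identical work, the y-axis splits by function, and the z-axis splits by tenant.

```python
import hashlib

X_CLONES = 4                                          # x-axis: identical clones
Y_SERVICES = {"simulate", "synthesize", "signoff"}    # y-axis: split by function
Z_SHARDS = 8                                          # z-axis: split by tenant

def route(tenant_id: str, service: str, request_id: int) -> str:
    """Return the pod/clone that should handle a WFaaS request."""
    if service not in Y_SERVICES:                     # y-axis split by service
        raise ValueError(f"unknown service: {service}")
    shard = int(hashlib.sha1(tenant_id.encode()).hexdigest(), 16) % Z_SHARDS  # z-axis
    clone = request_id % X_CLONES                     # x-axis distribution, no bias
    return f"{service}-pod{shard}-clone{clone}"

print(route("acme-semi", "simulate", 17))   # e.g. "simulate-pod<N>-clone1"
```

A WFaaS Service Level Agreement could then be expressed per axis, e.g. clone counts for throughput, service splits for fault isolation, and tenant shards for data isolation.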

 

Maturation of the semiconductor supply chain – a 20 year case study in macro Value Chain Evolution (VCE) Theory

In this blog entry, I will opine on the application of Harvard Business School’s Clayton Christensen’s Value Chain Evolution (VCE, http://www.claytonchristensen.com/disruptive_innovation.html) Theory to the semiconductor supply chain.  Christensen’s exposition of VCE theory as applied to supply chain management concentrated on micro valuation points in a product’s evolving modular architecture.  In this blog entry, I want to focus on macro valuation points in the larger semiconductor supply chain.

During the early days of the integrated circuit’s inception, perhaps only Jack Kilby (Texas Instruments) and Gordon Moore (Intel) could have envisioned the intricate web that has become the semiconductor supply chain. Vertical integration by early adopters was a necessary component of mastering product development and delivery within the supply chain.  As with any supply chain in its infancy, there were simply too many unknown parameters to modularize the valuation points immediately. During this early stage of the semiconductor supply chain’s evolution, companies were completely vertically integrated for their semiconductor needs. Transistor design, library circuits, functional design and simulation, physical layout, wafer processing, manufacturing test, slicing & dicing, final packaging, more manufacturing tests, product assembly, system test, and shipment were ALL performed by the same company.  Not surprisingly, the preeminent example of such vertical integration of the semiconductor supply chain in those early days was the IBM Corporation, and interestingly enough, that same company still represents the tallest vertical silo within the semiconductor industry.

Three semiconductor supply chain modularities, a fourth is emerging

As outlined in the figure above, the semiconductor industry has undergone three (3) major inflections in its value chain: (1) the Electronic Design Automation (EDA) software tools used to design integrated circuits, (2) multi-tenant manufacturing silicon foundries that gave rise to the fabless semiconductor company, and (3) a Semiconductor Intellectual Property (SIP) market that drives ultra-large-scale integration for System-on-Chip (SoC) development.  Stimulated by the advance of the Internet and a utility computing service model, aka cloud computing, we are now seeing a fourth valuation point inflection emerge in the form of outsourcing of the semiconductor design-to-release-manufacturing workflow infrastructure.  This fourth inflection point is a manifestation of the first WorkFlow-as-a-Service (WFaaS) cloud design pattern.

My primary contribution in this blog entry is establishing a direct correlation for each of the four (4) semiconductor supply chain inflection points with a distinct identity profile: (1) EDA Supplier, (2) Foundry, (3) IP Provider, and (4) Designer. The taxonomy of such identity profiles will define and intersect issues of privacy, security, anonymity, certification, authentication, governance, accountability, and reputation within a cloud computing enterprise architecture.  When implementing a cloud-based WorkFlow-as-a-Service (WFaaS) for semiconductor design-to-release-manufacturing, the four (4) semiconductor supply chain identity profiles exist as part of a reputation system.  Multiple identity profiles may be assigned to users, but must still adhere to identity profile restrictions. When class objects are augmented with cross-cutting concerns, such inter-class characteristics are called aspects.  The application of aspect-oriented programming to XML schemas used in WFaaS cloud services may prove to be an optimal solution.  This is another area of active research for a future blog entry.
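To make the identity-profile idea slightly more tangible (this is a minimal sketch under my own assumed names and restrictions, not a specification of any real WFaaS reputation system), the profiles and their restrictions might be checked like this:

```python
# Minimal sketch: the four supply chain identity profiles as a
# reputation-system check. Action names and the restriction table are
# invented assumptions for illustration.
from enum import Enum, auto

class Profile(Enum):
    EDA_SUPPLIER = auto()
    FOUNDRY = auto()
    IP_PROVIDER = auto()
    DESIGNER = auto()

# Hypothetical restriction table: which profiles may perform which action.
ALLOWED = {
    "upload_ip":                {Profile.IP_PROVIDER},
    "run_workflow":             {Profile.DESIGNER},
    "release_to_manufacturing": {Profile.DESIGNER, Profile.FOUNDRY},
    "publish_reference_flow":   {Profile.EDA_SUPPLIER, Profile.FOUNDRY},
}

def authorize(user_profiles: set, action: str) -> bool:
    """A user may hold several profiles; the action is authorized only if
    at least one of those profiles is permitted to perform it."""
    return bool(user_profiles & ALLOWED.get(action, set()))

print(authorize({Profile.DESIGNER, Profile.IP_PROVIDER}, "upload_ip"))   # True
print(authorize({Profile.FOUNDRY}, "run_workflow"))                      # False
```

The cross-cutting part is that the same authorization check must apply across every object in the WFaaS schema, which is why an aspect-oriented treatment is attractive.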

(1) Emergence of the EDA software industry (EDA Supplier Class)

Moore’s Law is at the root of integrated circuit complexity.  The “march to the sea” campaign for ever-increasing transistor density and performance created what has become total dependence on computer-aided design software applications for the successful design and manufacture of all semiconductor components.  Computer-aided design software for semiconductors, aka Electronic Design Automation (EDA), was birthed and matured in corporate research laboratories and leading academic engineering institutions.  Early EDA software applications (“tools” to design engineers) were not developed by professional software engineers, but by the engineers who actually used the software.  If there was a bug in the EDA software, it was identified by the user and, in early cases, fixed in the source code by the user.

As the EDA tools were required to do more and more, the application’s complexity increased.  Over time the engineering team’s dependency upon the EDA tools became intractable, and engineering management was forced to form separate EDA design teams within their engineering organizations. As the engineering organizations continued to expand, the EDA tool usage disseminated across corporate business units, with correspondingly increased demands upon EDA teams to train and support their design engineering customer base.  The immaturity and lack of standards created a wide range of engineering productivity, component performance, and final product differentiation for semiconductor companies.  Maintaining an edge in EDA tool capability translated directly into market advantages.  Semiconductor companies protected their EDA tool capabilities with as much security as their semiconductor intellectual property.

As semiconductor companies grew and semiconductor design teams and employees migrated between companies, EDA technology osmosis eroded the competitive advantages in EDA tools.  What shortly followed was the formation of startup companies armed with the copyrights and licensing for the EDA software applications formerly held by their new customers.  Today, the big three (3) EDA companies, Synopsys, Cadence Design Systems, and Mentor Graphics, dominate the ~$3.5B/yr EDA software market.

(2) The Fabless Semiconductor Industry (Foundry Class)

 

Wafer cooking!

The “big daddy” of the semiconductor supply chain is the silicon foundry.  If you’re thinking of building your own 32nm silicon wafer processing facility, you had better have north of $4B in your bank account.  Silicon foundry processing is a “pay to play” game and not for the faint of heart.  The expression “Real men own fabs” was not intended as a sexist remark, but became a true symbol of power in the semiconductor industry.  Just ask the largest multi-tenant silicon manufacturing company in the world, Taiwan Semiconductor Manufacturing Company (TSMC), how it feels to own their own silicon foundry.

The enormous fixed capital cost that comes into play in building and maintaining a leading-edge silicon foundry has been the sole driving force behind the modularity of this inflection point in the semiconductor supply chain.  Why the sole driving force? Because tremendous differentiation can still be gained in multiple product dimensions from an advanced silicon foundry process, so only the capital burden could push this valuation point out of the vertical silo. The most recent example is the business case for the advanced silicon process technology Silicon-On-Insulator (SOI, pronounced “soy”), advanced primarily by the IBM Corporation.  SOI processing had distinct technological advances in both power and performance, but the silicon wafer manufacturing volumes needed to make the investment in SOI fabs profitable were higher than IBM’s internal consumption.  Unless IBM sought to service external customer SOI wafer demand, while also partnering with other silicon foundries, even with IBM’s vast capital resources the cost of capital investment for maintaining a proprietary advantage in SOI technology could not be justified.

The SOI case analysis reflects the most recent activities in silicon fab partnership. Let’s go back and re-examine the emergence of the dedicated silicon fabs that gave rise to the fabless semiconductor company.  The following semiconductor companies were business “carve-outs/spin-offs” from larger technology conglomerates:  (1) Infineon (Siemens), (2) Qimonda (Infineon), (3) Agilent (HP), (4) NXP (Philips), (5) Freescale (Motorola), (6) HiSilicon (Huawei, actually fabless, but noteworthy as a separation of semiconductor component design from the systems business of Huawei), (7) ST-Micro (Thomson), (8) GLOBALFOUNDRIES (AMD) (unique in that a pure-play semiconductor component company also found the need to spin off the foundry part of its business).  This is not an exhaustive list, just compiled off the top of my head.  But take a look at this dominant market trend.  None of the technology companies listed in parentheses were able to maintain a profitable core business unless they jettisoned their respective silicon fabrication facilities.  While the IBM Corporation did not spin off its silicon fabs, IBM did form the Microelectronics Business Unit.  IBM Microelectronics does operate a foundry business model.  Although I don’t have specific numbers, my present understanding is that greater than 50% of IBM Microelectronics’ revenues come from external customers.

What became clear to all of these semiconductor companies was that the silicon wafer processing valuation point in the semiconductor supply chain was able to achieve so much differentiation that the resources needed to maintain a competitive advantage could not be profitably integrated into their adjacent core business models.  The cost of capital needed to invest in the fixed costs of silicon foundry operations required a completely different business model, i.e. an amortization of those fixed costs across many customers, a multi-tenant manufacturing facility.  Consider how other forms of heavy manufacturing employ multi-tenancy in their manufacturing operations.  Some companies just manufacture one modular component that is supplied to multiple customers.  The example that comes to mind here is Cummins Diesel.  Cummins manufactures diesel engines that are used in multiple companies’ final assembled industrial and commercial trucks.  Other component suppliers manufacture a catalog of modular assemblies that are sold and distributed to many different systems integrators.  The case of a multi-tenant silicon foundry manufacturing operation is fundamentally different.

In the case of a silicon foundry, the outsourced modularity lies strictly in the process of manufacturing transistors and interconnecting those transistors on a silicon wafer, NOT in the final component’s functionality.  In the previous heavy manufacturing example, it is the modular component’s functionality that has the outsourced value, NOT the process of manufacturing the component. It is important that the reader detect and understand this difference.  Why?  Because it is this subtle, yet vitally important, difference that forces upon Foundry class stakeholders a heterodox dependency for success upon two of the other three, much less capital intensive, stakeholders, namely, IP Providers and EDA Suppliers.

The key to a silicon foundry’s profitability is maintaining a full capacity of silicon wafers to process.  How that manufacturing capacity is filled is left up to a foundry’s sales and marketing teams.  The first order of foundry competitiveness is having a silicon process that delivers the desired transistor performance and transistor density.  Due to the high fixed costs, consolidation and exits from the foundry market have reduced the number of pure-play silicon foundries.  In order to stay in business, every foundry will deliver similar process offerings to their customers.  In no small part due to silicon foundry alliances between multiple foundry companies, the process offerings may be virtually identical, as the alliance members will share process intellectual property.  The second order of differentiation for a foundry to a Designer class customer then becomes the foundry’s catalog of FUNCTIONAL semiconductor IP (IP Provider) AND the foundry’s DESIGN REFERENCE FLOW (EDA Supplier).

More to come…

(3) Systems-on-Chip (SoC), an evolution in ultra-large scale integration (IP Provider Class)

 

Systems-on-Chip

Even from the earliest days of my career as a design engineer, I’ve always been a keen observer of the design methodology processes associated with integrated circuit design.  Every chip designer had this dream of starting with a clean slate of silicon, designing every logic gate and circuit on the chip.  Ah, how quickly the dreams of youth are shattered! The financial reality of chip design is quickly seared into the mindset of engineering management…these devices are VERY hard to design, even harder to get right on the first pass of manufacturing, and oh yes, chips are very expensive to manufacture!  By the way, don’t forget that for every product defect that is the result of a malfunctioning integrated circuit component, six (6) to nine (9) months of market “opportunity cost” (oh no!!! not the “opportunity cost” factor!) is lost while the semiconductor component is fixed and undergoes another manufacturing spin.

In today’s fast-moving product cycles, a manufacturing re-spin of a chip component targeted for product shipment most likely means that the chip and its associated design team will have to find another component to work on.  The reality that set in quickly was that success in chip design and manufacturing was INVERSELY correlated to the amount of INCREMENTAL new functionality that had to be designed into the component.  The more functional design modules of a working chip design that could be re-used in the new chip, the quicker the time to market and the higher the level of confidence in achieving a first-pass design. This was the foundation upon which the Semiconductor Intellectual Property (SIP) market was built.

The problem with the early days of SIP re-use lay in the lack of on-chip modular interfaces, another classic example of Value Chain Evolution (VCE) Theory applied to the semiconductor supply chain.  As a system becomes more complex, modularity increases.  Early chip design architectures had crude levels of modularity and hierarchy.  Entire chip design methodologies evolved around modular architectures.  The lack of modularity in early chip design created EXTREMELY hard re-use scenarios that were fraught with as much uncertainty as if the design had been started from scratch.  In fact, what many engineering management teams did not understand in the early days of chip architecture was that free-for-all design modularity schemes and methods by various design engineering teams created nightmarish episodes when design teams attempted to re-use functionally correct modules in a new chip architecture.  Unless they had lived previous engineering lives as integrated circuit design engineers, it was a conundrum to many engineering managers why they were still running into the same issues with chip designs that had high percentages of re-used modules, yet the design cycle was taking practically the same amount of time.  All of these problems centered around the lack of modular interfaces in evolving chip architectures.

The emergence of Register Transfer Languages (RTL), such as Verilog and VHDL, made great strides in standardizing modular interfaces for chip designers. Subsequent modularity evolutions that quickly materialized gave rise to the concept of a System-on-Chip (SoC). The SoC design mentality changed the approach chip architects used in modularity and hierarchy.  Chips were now being designed with re-use and incremental re-spins in mind. The silicon upon which the circuits were being laid was merely a canvas for the final picture of design.  Take a look at any modern day integrated circuit layout through the lens of a microscope and a layman can quickly identify key components of modularity.

I also do not want to discount the importance of the effect of Moore’s law with respect to the drive toward SoC modularity. The enormous growth in transistor density, now at the 32nm node, has provided such a large canvas of silicon upon which functional capability can be manufactured that it has become extremely difficult to maintain in house ALL of the expertise needed to design and maintain every functional module.  Functionality that was previously contained in integrated circuit components from other semiconductor manufacturers and purchased for final product assembly on a printed circuit board with other integrated circuits was now being integrated on the SAME integrated circuit.  Semiconductor component companies didn’t really have the option of not moving toward SoC scales of integration.  Silicon foundries could not afford to maintain older silicon processes with lower transistor densities, forcing component designers into smaller transistor geometries.  In order to maintain competitiveness with other component manufacturers, the more functionality that could now be integrated, the more attractive a company’s component would be to systems integrators.

It became clear that VCE Theory would quickly kick into play for SoC component design.  By modularizing the functional component interfaces, a SIP market could now emerge to play a role in providing known-good, tested, high-quality SIP to potential component design teams.  How would the SIP provider recoup their investment in the design and test of the SIP products?  Through royalty licensing models extracted at the point of manufacture! I want the reader to remember this key business model concept, namely, the extraction of VALUE AT THE POINT OF MANUFACTURE!! The change in the valuation point of a supply chain, where that which is good enough will be outsourced, is exemplified in no greater fashion than in the emergence of the SIP market and the extraction of compensation through manufacturing volumes!

IP-XACT – XML Schema for SIP Re-use

The red-headed stepchild of the semiconductor industry?  Not hardly! More…
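To make the idea concrete, here is a minimal sketch of reading an IP-XACT-style component description in Python. The element names and document structure below are deliberately simplified for illustration (the real IEEE 1685 schema is namespaced and far richer), and the vendor/library/name/version values and bus interface names are hypothetical.

```python
# Minimal sketch: reading a simplified IP-XACT-style component description.
# The elements follow the general VLNV (vendor/library/name/version) pattern
# used by IP-XACT, but this is NOT the exact IEEE 1685 schema.
import xml.etree.ElementTree as ET

COMPONENT_XML = """
<component>
  <vendor>example-sip-vendor</vendor>
  <library>interconnect</library>
  <name>axi_crossbar</name>
  <version>1.2</version>
  <busInterfaces>
    <busInterface><name>s_axi_0</name></busInterface>
    <busInterface><name>m_axi_0</name></busInterface>
  </busInterfaces>
</component>
"""

def describe_component(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    # VLNV identity: the "part number" of a piece of SIP.
    vlnv = ":".join(root.findtext(tag, default="?")
                    for tag in ("vendor", "library", "name", "version"))
    # The interfaces are the contract that makes the module re-usable.
    buses = [b.findtext("name") for b in root.iter("busInterface")]
    return f"{vlnv} exposes interfaces: {', '.join(buses)}"

if __name__ == "__main__":
    print(describe_component(COMPONENT_XML))
```

The value of a standard like IP-XACT is exactly this: once a component’s identity and interfaces are machine-readable, tooling across the supply chain can consume them without renegotiating the format for every design team.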

(4) Silicon Stratus – cloud-based semiconductor design-to-release-manufacturing (Designer Class)

Modern day 32nm Chip Designer

The three (3) previous inflection points in the semiconductor supply chain define unique identity profiles.  The fourth inflection point finalizes the picture of the semiconductor supply chain by focusing upon the Designer class of the development cycle.  Clearly not a minor player or role, the Designer class is at the center of the semiconductor supply chain.  The Designer class identity profile sits at the nexus of the other three (3) identity profiles. The Designer class is the seed that conceives the rest of the semiconductor supply chain; the other three classes exist BECAUSE of the Designer class. It is the liberating point that gave rise to the emergence of the other three identity profiles of the semiconductor supply chain. I know that may sound overly parochial, but I do mean to impart the importance of this relationship.  We’ve got to recognize when and where the “tail is wagging the dog” in the semiconductor market.

Designers are the primary consumers in the semiconductor supply chain. Think of it as the transformation from the first Iron Man suit that Robert Downey Jr. created in the caves of Afghanistan to the final version he cooked up in his basement in Malibu! The Designer class today has stripped itself of all distractions, leaving only the lean meat of functional design creativity in its domain…ALMOST!! While the Designers have “cleaned up” their design responsibilities, as the centrum of the supply chain they have created a “head-of-line blocking” logistical nightmare that has forced the Designer class into a new management role for each of the three (3) identity profiles’ informational exchange. The informational exchange between Foundries, EDA Suppliers, and IP Providers has become too reliant upon the Designer class to drive their own internal processes. As a result, the Designer must manage the entire semiconductor design-to-release-manufacturing methodology process needed to actually design semiconductor components.

The Designer class has outsourced that which is “good enough” (Foundry, EDA Supplier, and IP Provider) and retained and preserved those key core skills that define who they truly are within the semiconductor supply chain. Now that the Designer class has disintegrated its supply chain into its fundamental components, the final step in attaining the Xanadu of productivity is to RE-INTEGRATE the four (4) supply chain identity profiles into a secure, multi-tenant, collaborative IT environment.  This is where the potential of the Internet begins to reach a state of Nirvana for Designers through the realization of a utility computing or cloud computing model for semiconductor design-to-release-manufacturing WorkFlow-as-a-Service (WFaaS), i.e. a Silicon Stratus!

VCE Theory produces another child in the semiconductor supply chain…the Silicon Stratus!

More to come…

Cloud-based Intellectual Property Rights – Part 1 – Defining “Parametric Security” and Digital-Rights-as-a-Service (DRaaS)

I’m personally energized by the IT industry trend toward cloud computing.  Rather than fearing the new paradigm for utility computing, I see the impact of utility computing as a tremendous leveling of the playing field for virtually EVERY field of engineering research and development.  I’m no different than any other engineering or business manager in sharing the concern for IP security in the cloud.  In fact, it is my very concern for IP security rights that adds to my belief that cloud computing architectures and services, properly architected, can INCREASE IP security and governance over a firm’s present distributed, isolated IT systems.  As systems and technology products continue to advance in complexity, the need for engineering collaboration between firms increases in a non-deterministic manner.

I postulate the following emerging dependencies…

Increasing system complexity is positively correlated with the demand for IT processing resources.

(1) Increasing levels of system complexity compel increasing reliance upon computer-aided design, and (2) increasing reliance upon computer-aided design increases the engineering demand for IT processing resources.

Increasing system complexity subjugates product architectures to the realities of Value Chain Evolution (VCE) theory.

Value Chain Evolution (VCE) Theory necessitates that supply chains effectuate higher degrees of collaboration.  As supply chains increase in complexity, modularity increases.  As modularity increases, key supply chain valuation points become amplified such that what is “good enough” can be outsourced, leaving niche differentiation as the key profit driver.

Cloud computing enterprise architectures can enact major positive changes for BOTH of the above postulates by (1) supplying on-demand, scalable processing resources, (2) providing a secure, multi-tenant design environment through distinct identity profiles for collaboration, and (3) standardizing design processes as WorkFlow-as-a-Service (WFaaS).

Cloud Security Is a Recurring Theme

A recurring theme in the resistance to the adoption of cloud computing is the inherent security risk that comes with working in a shared computing resource environment.  Unquestionably, the cloud’s security concerns are valid and warranted.  As a primary inhibitor to the adoption of cloud services by prospective users, cloud security concerns must be addressed by service providers in a transparent and satisfactory manner, lest their days as a service provider become short-lived.  While not a closed subject by any means, much has been written about the user and IT requirements for cloud security.  As far as I am aware, very little has been written or debated about what I term parametric security, i.e. security guidelines within prescribed limits and/or boundaries.  When a parametric security stereotype is applied to digitally distributed Intellectual Property (IP) rights in collaborative computing clouds, the design pattern that emerges is Digital-Rights-as-a-Service (DRaaS).  In this blog, I will seek to (1) contrast the types of digitally distributed IP that require DRaaS and then (2) discuss the foundational requirements of parametric security in cloud computing enterprise architectures, i.e. the Digital-Rights-as-a-Service (DRaaS) design pattern.
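To ground the term before moving on, here is a minimal sketch of what “security guidelines within prescribed limits” could look like when evaluated at the moment a protected asset is used. Every class and field name below (DRaaSPolicy, UsageParameters, licensed_tenants, and so on) is a hypothetical illustration of the idea, not part of any published DRaaS specification.

```python
# Illustrative sketch only: parametric security expressed as a digital right
# that is re-evaluated against prescribed limits at every point of use.
from dataclasses import dataclass
from datetime import date
from typing import Set

@dataclass
class UsageParameters:
    tenant: str          # requesting identity profile
    use_count: int       # cumulative uses of the protected IP so far
    request_date: date

@dataclass
class DRaaSPolicy:
    licensed_tenants: Set[str]
    max_uses: int
    expires: date

    def permits(self, params: UsageParameters) -> bool:
        # Each clause is one parametric boundary on the digital right.
        return (params.tenant in self.licensed_tenants
                and params.use_count < self.max_uses
                and params.request_date <= self.expires)

policy = DRaaSPolicy({"designer-team-a"}, max_uses=100, expires=date(2012, 12, 31))
print(policy.permits(UsageParameters("designer-team-a", 3, date(2011, 6, 1))))  # True
```

The point is simply that the right is not a one-time grant; it is re-evaluated against parametric boundaries every time the IP is touched.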

Astute readers will take issue, noting that DRaaS is simply another term for Digital Rights Management (DRM).  Unquestionably, DRM and DRaaS have similarities, but primarily at an abstract level.  DRM and DRaaS are not two separate pronunciations or language translations of the same word.  If I were looking for a cultural metaphor for DRM and DRaaS, I’d look to the tradition of marriage and its diverse associated ceremonial rituals. The general outcome and objectives of marriage are effectively the same for all cultures, yet there are as many cultural and social distinctions in marriage ceremonies as there are cultures.  It is the same with DRM and DRaaS: the two seek very common objectives, but the implementation methods through which we get to the final result are highly contrasting in nature.  Additionally, just as married couples from different cultures often live very different lives from one another, so too will the life cycles of parametric security implementations for DRM and DRaaS be varied and operationally unique.

The primary differences in DRM and DRaaS implementations lie in the types of datasets that each parametric security design pattern manages. DRM is typically applied to multi-media files, e.g. music or video files that are digitally distributed. However, digital distribution of Intellectual Property (IP) is not restricted to the entertainment genre. Outside of multimedia files, the next set of digitally distributed data that comes to mind is open source software. Software source code has been afforded copyright protection for many years, and the open source community has controlled those protections on digitally distributed source code through licenses such as the GNU General Public License. The product use of open source code is ultimately in the form of a binary object file compiled for a target processor architecture or executed on a virtual machine architecture, e.g. a JVM.  The resulting compiled binary, as executed on the target processor or virtual machine, obscures the original source code. Dynamically and statically linked libraries are also distribution mechanisms for open source code.

In these examples, the software source code has undergone one degree of non-orthogonal transformation, i.e. compilation to native object code or to virtual machine bytecode.  In what sense do I mean non-orthogonal?  In the sense that the targeted operational domain of the IP has not changed.  The original high-level source code was targeted for a von Neumann style of object code execution.  Even after compilation, it is relatively straightforward to trace source code origins through both functional behavior and static analysis.  A second degree of non-orthogonal transformation in this case could be viewed as a re-compile of the original source code or binaries with compiler optimizations turned on.
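A tiny, concrete instance of this single-degree transformation can be seen with Python’s own bytecode compiler (the function and its numbers are purely illustrative): the bytecode no longer looks like the source text, but the operational domain, sequential execution on a virtual machine, is unchanged, and the behavior remains easy to observe and trace back to the source.

```python
# Illustrative only: source code compiled for a VM is one degree of
# "non-orthogonal" transformation. The bytecode below hides the original text,
# yet it still targets the same style of sequential execution, and its
# observable behavior maps straight back to the source.
import dis

def royalty(units_shipped: int, rate_per_unit: float) -> float:
    # Hypothetical per-unit royalty calculation, invented for this example.
    return units_shipped * rate_per_unit

dis.dis(royalty)                 # inspect the compiled bytecode
print(royalty(10_000, 0.03))     # observable behavior: 300.0
```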

This brings us to another type of intellectual property that is distributed in a source code format but undergoes a domain transformation that is orthogonal.  While I don’t want to get caught up in the trivialities of my loose definitions of orthogonal versus non-orthogonal domain transformations, a contrasting definition is in order.  In my definition of an orthogonal domain transformation, what is input to the transform is not easily recovered, or is impossible to reverse engineer, i.e. there exists no inverse function.  You may contend that compilation of C++ code to a binary has no inverse function for de-compiling if the source code is not compiled with “debug” options.  Strictly speaking, that is true, thus my subjective caveat, “…not easily recovered”.  However, I contend that reverse engineering the observed behavior of object code using processor emulators or virtual machines is not that difficult for trained and experienced eyes.  But, as I said, I don’t want to get caught up in the semantics.

What forms of digitally distributed IP undergo single and multiple degree orthogonal domain transformations?  Engineering or technology intellectual property.

As an electrical engineer and semiconductor component designer, the primary example that comes to my mind is semiconductor intellectual property that is integrated as part of what component designers classify as Systems-on-Chip (SoC, pronounced ess-ō-see).  There are in fact many other types of engineering intellectual property that are subject to multi-degree orthogonal domain transformations.  My supposition is that if you were to review the patent database and then cross-match those patents with the computer-aided design automation processes applied in each patent’s industry, you would very quickly categorize virtually all intellectual property that could be protected through a cloud-based DRaaS design pattern.

More to come from this blog entry…

 

Smoothing the transition to the cloud through systems engineering – Visualizing integrated quantitative business process models with qualitative enterprise architectures.

Systems engineers see the world through a lens of optimization.  They seek to quantify that which may not be inherently quantifiable. They’re seeking equilibrium and symbiosis in the context of man and machine interactions. They create visual models in an effort to bring organization, behavioral understanding, and simplicity to complex systems.  As business processes have continued to evolve in complexity, the Object Management Group (OMG) developed the Business Process Modeling Notation (BPMN) in an effort to help businesses visualize, quantify, optimize, and simulate business processes.  This blog is not about BPMN, but I do encourage readers to spend time examining BPMN semantics and BPMN’s applicability to your company’s business processes.

The question I examine in this blog is how a business or cloud computing OEM can use systems engineering techniques to model the impact of cloud computing on marketing strategy and process efficiency.  Can process efficiency models and cloud enterprise architectures be integrated into interrelated models?  If an IT enterprise architecture should properly be viewed as a corporate strategy (Enterprise Architecture as Strategy, Ross, Weill, Robertson, Harvard Business School Press, 2006), then an effort to apply a systems engineering approach to creating a visual, executable model of a company’s cloud computing enterprise architecture will have definable positive effects on Return-On-Investment (ROI), market competitiveness, time-to-market, and financial results.

Those of us in the circles of discussion on cloud computing are comfortable with the technical nuances and distinctions of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). As such, IaaS, PaaS, and SaaS have become technical terms with formal classifications and definitions.  This is fine and good, but profitable businesses do not operate on the hope that their customers and products can be well classified.  How do business managers tailor and optimize IaaS, PaaS, and SaaS services to transition their unique businesses to cloud computing enterprise architectures?  Profitable product development, marketing, customer support, and finance are married to the market conditions and customer profiles that a company relies upon for its survival. Today’s complex markets, supply chains, and strategic partnerships (1) are international in scope, (2) have many interrelated business and technical parameters, and (3) produce unforeseen direct and indirect network effects for all stakeholders. Such complexity has already led many companies to quantify their go-to-market strategies through statistical modeling of market dynamics.  An increasing number of highly successful companies rely upon quantitative marketing metrics to refine their business processes and strategies.  Visualizing these complex business models for ROI analysis brings into play the use of visual modeling languages such as SysML, with its block and parametric diagrams.

As if the challenges of modeling the fundamental business processes and customer profiles that are well known to a company weren’t daunting enough, the IT industry has introduced a cloud computing tsunami that has thrown yet another major business technology variable into the mix of competitive advantages that cannot be ignored.  In no small part due to the confusion over exactly what cloud computing is (some old school ISVs continue to tell their customers that they have been delivering their software in cloud form for twenty (20) years!), insightful business managers will want to know specifics about how a cloud/utility computing model can be used not only to realize new profits through process optimizations, but also to enhance their company’s market competitiveness.  Transitioning conventional business models and processes to cloud computing enterprise architectures is not an easy process for businesses.  The non-trivial amount of money (over $500B by 2015) to be invested in cloud infrastructure necessitates accurate, efficient, and optimized cloud computing models that can be used by management.

…is there an innate inevitability to the adoption of cloud computing product offerings?

I don’t believe that one can look at cloud computing or utility computing service models as a fad.  There are just too many smart people who have researched cloud computing enterprise architecture financials to discount the numbers as fundamentally flawed.  Some may point to the Internet bubble of 2000 as evidence of a technology fad gone awry.  However, a closer examination of that market implosion reveals not an elemental defect in the impact of the Internet’s technology, but a market overreaction and miscalculation of the market’s rate of absorption of an Internet-centric economy.  Just as a sponge can absorb only so much water before saturation, thereby requiring a larger sponge, the Internet economy had to grow at its natural rate to continue to absorb the impact of the Internet’s marketing power.  As we have observed, the Internet didn’t go away after the turn of the millennium; rather, it has steadily increased its expansive domain of influence.  Today, we see yet another major inflection point enabled by the Internet in laying a data communications foundation upon which cloud computing can now flourish as a viable business model for utility computing in ways that were never imaginable through the shared computing architectures of the mainframe computer.

Cloud computing projects should encompass hardware, software, and data-centric models to help enterprise architects define where and how middleware service modules map to a proposed cloud.  UML’s systems engineering profile, SysML, has been tailored to accommodate composite hardware and software interface configurations.  Two key SysML constructs stand out in this context: (1) block diagrams and (2) parametric diagrams.  SysML block diagrams correspond to classical UML class diagrams. SysML blocks are defined with properties, and block properties can be assigned constraints.  SysML properties range from simple types (numbers, strings, lists) to complex types (objects, blocks).  This range of property specification allows the systems architect to assign complex constraints that can be factored into hardware performance (IaaS), application programming interfaces (PaaS), and software functionality (SaaS).  Financial analysis is similarly factored into the system model’s block properties and subsequent constraints.  This should be done following well-formed object-oriented programming practices.  SysML parametric diagrams are then able to visualize the interactions among block properties and the data flow through ports.  The system model’s port connections provide the visualization of the interaction between hardware, software, and financial metrics to create an executable system model that can be “solved” by readily available constraint-solving applications, e.g. Wolfram’s Mathematica. The system model’s properties and constraints can be easily iterated (using a spreadsheet to input parameter data sets) for repeated solutions that can then be analyzed for optimal results.
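To make the workflow tangible, here is a minimal sketch in plain Python of the same idea: blocks carry properties, a constraint binds those properties into an ROI equation, and parameter data sets (as a spreadsheet export might supply) are iterated for repeated solutions. The block names, cost figures, and the ROI formula are illustrative assumptions of mine, not a standard SysML mapping, and a real model would hand the constraint network to a proper solver.

```python
# Hedged sketch: the spirit of a SysML block + parametric diagram in plain
# Python. Blocks hold properties; a constraint binds them; parameter sets are
# iterated and "solved" for analysis. All names and numbers are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass
class IaaSBlock:             # hardware performance properties
    nodes: int
    cost_per_node: float

@dataclass
class SaaSBlock:             # software functionality / revenue properties
    subscribers: int
    price_per_seat: float

def roi_constraint(iaas: IaaSBlock, saas: SaaSBlock) -> float:
    # Constraint binding block properties: ROI = (revenue - cost) / cost
    cost = iaas.nodes * iaas.cost_per_node
    revenue = saas.subscribers * saas.price_per_seat
    return (revenue - cost) / cost

# Iterate parameter data sets, as one would from a spreadsheet export.
node_counts = [50, 100, 200]
seat_prices = [20.0, 35.0]
for nodes, price in product(node_counts, seat_prices):
    roi = roi_constraint(IaaSBlock(nodes, 1200.0), SaaSBlock(5000, price))
    print(f"nodes={nodes:>3}  price={price:>5.1f}  ROI={roi:+.2f}")
```

The value is not in the arithmetic, which a spreadsheet could do just as well, but in the explicit, typed interfaces between blocks, which is precisely what a parametric diagram then lets the architect visualize.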

Why go to this effort to interrelate the system’s hardware and software characteristics to financial metrics?  Because complex financial models designed into spreadsheets such as Microsoft Excel are extremely difficult to visualize.  Spreadsheets are also typically programmed through cell dependencies and procedural code in Visual Basic modules; they are not inherently designed as objects that fall into a clean object-oriented programming model.  Object-oriented programming was designed to address complexity, and visual modeling languages like SysML were designed to visualize object-oriented complexity.  Using SysML block and parametric diagrams, cloud computing enterprise architects have the ability to visualize complex object interactions between a configurable, composite set of hardware/software metrics and the accepted financial metrics used in business case analysis.

Example of a Financial SysML Block Diagram Instance with Parametric Constraints

Can closed-loop feedback (output parameters fed back into the input) be incorporated into SysML models to provide optimized solutions?  Not in a deterministic sense, nor would I recommend attempting any non-deterministic practices in pursuit of a closed-form solution.

As with any social science, the “long tails” of statistical models can never be discounted, nor can quantitative models completely factor out risk (note Long Term Capital Management).  Business success is ultimately about managing market expectations.  Quantitative models can help us make human decisions that either positively or negatively affect our target market’s expectations.

By incorporating the effects of hardware, software, and middleware marketing and performance metrics into the business’s financial model, a highly robust ROI analysis can be performed well before millions of dollars in capital outlay have to be transacted.  Such contemplative, structured analyses will always result in better management decisions, while visualizing the system interactions between operational and financial interdependencies will provide a more optimized and collaborative enterprise architecture.

Questions for consideration in future blogs…

  • When, where, and how should a company undertake a formal process for mapping their business model to a cloud computing enterprise architecture?
    • Can a formal process for such a mapping be defined?
  • Is there a way to integrate quantitative financial ROI metrics with qualitative interface model frameworks that can offer more insight and behavioral understanding through systems engineering practices?
  • How can systems engineering techniques be applied to business transformation process models?
    • If so, what would such a representative model look like?
  • Are there implications for data models (XML, JSON) in systems engineering, e.g. aspect-oriented programming methods and cross-cutting concerns, that should be considered?
  • Are certain industry sector or supply chain profiles more apt to benefit from cloud computing than others?
    • What role does creativity play in applying cloud computing in developing the business model?
      • While quantitative (left brain) thinking becomes commoditized, qualitative (right brain) thinking to create business models may be the key to differentiating the application of technology solutions.
      • Qualitative thinking is highly visual in nature.
  • New cloud computing design patterns such as WorkFlow-as-a-Service (WFaaS) and Digital-Rights-as-a-Service (DRaaS) must be modeled in the context of enterprise frameworks for IaaS, PaaS, and SaaS.
    • Digital-Rights-as-a-Service (DRaaS)
      • This is not DRM for multi-media, but cloud-based intellectual property rights for scientific, engineering, or technology data.
      • A new term for cloud-based technology intellectual property: what exactly is it?