
Bridging the technological “valley of death”

Imagine if the greatest technological inventions in history—such as the steam engine, electricity, or the Internet—had been left stranded in research laboratories and never transferred to society. The world would look radically different today. There are plenty of untold stories of inventions that never saw the light of day, and many potential breakthroughs fall into the so-called technological “valley of death”: the gap between academic research and industrial commercialization. This is a missed opportunity for economic and social progress. In this article, I offer five actions that academia and industry can take to bridge the valley of death and co-create innovation.

In recent years, the public has often associated technological innovation with the most successful private companies in the world, such as Apple, Google, and Microsoft. These organizations have built an entrepreneurial culture characterized by networks (rather than hierarchies), empowerment (rather than control), and agile execution (rather than rigid planning). While these factors play a role in fostering technological innovation, one of the fundamental drivers remains technology research. Unlike traditional research—which aims at obtaining new knowledge about the real world—technology research aims at producing new and better solutions to practical problems.

Consider the disruption brought to the smartphone market by the iPhone. The user experience of Apple’s first smartphone was undoubtedly superior to the competition, and this was probably the recipe for its success. However, a significant portion of the technologies leveraged by the iPhone was originally conceived in research conducted by the public sector. The programming languages used in the iPhone, for instance, have roots in the research conducted between the 1950s and the 1980s by multiple organizations worldwide, including the Massachusetts Institute of Technology and the Norwegian Computing Center.

The journey of new technology from research to commercialization goes through a number of technology readiness levels (TRLs). These levels were developed at NASA between the 1970s and the 1990s, and indicate the maturity of technologies. The latest version of the scale from NASA includes nine TRLs and has gained widespread acceptance across governments, academia, and industry. The European Commission adopted this scale in its Horizon 2020 program.

The nine technology readiness levels are:

  1. Basic principles observed
  2. Technology concept formulated
  3. Experimental proof of concept
  4. Technology validated in a lab
  5. Technology validated in a relevant environment
  6. Technology demonstrated in a relevant environment
  7. System prototype demonstrated in an operational environment
  8. System complete and qualified
  9. System deployed in an operational environment

Some tech giants are large enough to fund research units and work at all levels of this scale, but most companies cannot afford the high investments and specialized competencies this approach requires. Either way, in today’s dynamic and volatile markets, companies must aim to do the disrupting if they want to avoid being disrupted. Companies that cannot afford research units have to innovate by relying on research conducted elsewhere, and the natural candidate is academia.

The question is: at which TRL does academia transfer technology to industry?

Academia tends to focus on TRLs 1–4, whereas industry prefers to work with TRLs 7–9, rarely 6. Therefore, TRLs 4–6 represent a gap between academic research and industrial commercialization. This gap is colloquially referred to as the technological “valley of death” to emphasize that many new technologies reach TRLs 4–6 and die there.

Technology Readiness Levels in Academia and Industry

The issue of the valley of death has been studied extensively, and the scientific literature offers several proposals for bridging the gap. With the aim of facilitating technology transfer, some research organizations have devoted themselves entirely to technology research over the last 50 years. The Fraunhofer Society in Germany and SINTEF in Norway are notable examples. Still, these efforts have not entirely bridged the gap between academia and industry.

So, how can academia and industry bridge the technological valley of death and co-create innovation?

1. Academia and industry should better understand each other’s culture

To paraphrase a well-known saying: academia is from Mars and industry is from Venus. This is reflected not only in the way people work but also in the way they communicate and possibly dress. The following statements are deliberate exaggerations, but they invite reflection on the cultural differences in the status quo. Academics have a long-term horizon; practitioners have a short-term horizon. Academics think (often critically); practitioners do (often routinely). Academics are pedantic; practitioners are practical. And the list could go on and on… Note that academia is not superior to industry, and industry is not superior to academia. They are just different, and this diversity can be healthy for driving technological innovation. Understanding each other’s culture is of paramount importance for improving collaboration between teams on both sides.

2. Academics should better understand real-world industrial challenges

Academic research tends to aim at academic communities. As a consequence, an increasing number of research papers do not have adequate industrial relevance. As Lionel Briand—a recognized professor and researcher in software engineering—recently put it: “If a solution to a problem is not applicable and scalable, at least in some identifiable context, then the problem is not solved. It does not matter how well written and sound the published articles are, and how many awards they received.” Academics should join industrial fora (conferences, seminars, etc.) more often in order to understand real-world industrial challenges.

3. Practitioners should stay up to date with the state of the art

The fact that certain research papers lack adequate industrial relevance does not mean that industry cannot resort to the scientific literature to find answers to its questions. Quite the contrary: the answer is often available in peer-reviewed journals, many of which are openly accessible. Consider, for instance, password expiration policies. There is evidence that changing passwords on a regular basis does more harm than good, since it drives users to new passwords that can be predicted from previous ones. Nevertheless, most IT departments keep enforcing this policy rather than adopting modern authentication solutions. Imagine if medical doctors neglected research results in the same way… Unrealistic, right? Practitioners should join academic fora and consult the scientific literature more often in order to stay up to date with the state of the art.

4. Industry should hire more PhDs

Besides the qualities that may make PhDs more valuable than other job candidates, people who have invested at least three years of their lives doing research in academia know the environment well enough to foster the identification of research trends, the understanding of research results, and—most of all—collaboration with academia. With academic inflation driven by a steadily increasing number of doctorates awarded every year, the academic pyramid is getting wider, and not all PhDs can obtain permanent academic positions. Industry has the chance to capitalize on this trend by hiring more PhDs.

5. Academia and industry should conduct more joint research projects

Academia and industry can organize their joint research projects based on two models: 1) bilateral collaboration, where both academia and industry contribute in cash or in kind, or 2) research projects partly funded by governmental organizations. The latter option may be better suited in cases of tight budgets or a lack of experience with this kind of collaboration.

Research is by definition unpredictable, and research projects cannot be assessed with a simple cost-benefit analysis. Nevertheless, statistics on past research projects show a high long-term return on investment. The European Commission and the Research Council of Norway (as well as other governmental organizations around the world) offer a wide variety of research programs across a heterogeneous set of domains, which should be sufficient to cover the needs of most organizations. The competition for funding has increased dramatically in the last decade, so stakeholders should pick their partners and prepare their funding applications carefully.

While the success stories of fruitful collaboration between academia and industry are encouraging, there is a lot of untapped potential in the synergy between the two. Understanding each other’s culture, joining each other’s fora, leveraging PhDs’ competence and skills, and conducting joint research projects are certainly steps in the right direction. Technology has been responsible for tremendous economic growth and increased quality of life for billions of people. The technological valley of death is definitely a problem worth solving, regardless of whether the ultimate goal is increased profit or the greater good.

A shorter version of this article was published in Norwegian in Teknisk Ukeblad in February 2019.

Opinion: Why is model-driven engineering unpopular in industry, and what can we do about it?

Model-driven engineering (MDE)[1] is a branch of software engineering that aims to improve the effectiveness and efficiency of software development by shifting the paradigm from code-centric to model-centric.

I have been in the MDE community on and off for about 15 years. My supervisor at the University of L’Aquila, Alfonso Pierantonio, introduced me to MDE in 2003. Back then, the approach was still in its infancy and was not even called model-driven engineering. I wrote my Bachelor thesis in 2003 on code generation based on Unified Modeling Language (UML), and my Master thesis in 2006 on model versioning.

During my four years as a PhD candidate at the University of Bergen, I researched formal aspects of model versioning and multi-level modeling, and successfully defended my PhD thesis in 2011. During my following four+ years as a researcher at SINTEF, I conducted applied research on domain-specific languages and models@run-time for automating cloud management platforms. I also compared two-level and multi-level techniques for modeling cloud application topologies. My work has led to several publications in journals and conference proceedings.

Eventually, I decided to return to the business world, with the aim of transferring these research results to industry. As an advisor and manager at some of Norway’s largest IT organizations, I have worked with architectures and solutions as well as trained colleagues and clients. While I did not expect MDE to be widespread, I did expect UML and domain-specific languages (DSLs) to be an integral part of these activities. Unfortunately, I have been disappointed.

What’s the problem?

Take my statements with a grain of salt—this is an opinion piece. My personal experience is not representative of the whole industry, and IT professionals working in other domains may have a widely different experience. Nevertheless, here is what I learned:

Industry does not use UML or uses it poorly

For the majority of IT professionals I have met (architects, developers, operators, etc.), “modeling” means drawing boxes and arrows in PowerPoint or similar applications. Only about 20% of the IT professionals I have met are familiar with UML, and only a fraction of them use it, in a few cases with questionable results. “I’ve seen things you people wouldn’t believe…”, including diagrams mixing the syntaxes of use case, component, and sequence diagrams.

The lack of adoption of UML or any other general-purpose modeling language is problematic. It introduces ambiguities and increases the chances for misunderstandings.

Industry uses DSLs but does not know about it

The majority of IT professionals I have met use DSLs regularly. However, none of them knew what a DSL was before I told them.

In the field of cloud services, AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager, and the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) are all examples of DSLs. However, developers and operators often regard the specifications written in these DSLs as configuration files rather than models.
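
To illustrate the difference, here is a minimal sketch in Java with hypothetical types (not any vendor’s actual API): treating a deployment description as a typed model rather than an opaque configuration file makes it possible to traverse and check it programmatically.

```java
// Hypothetical types for illustration only: a deployment description as a typed model.
import java.util.ArrayList;
import java.util.List;

record VirtualMachine(String name, int vcpus, int memoryGb) {}

record Topology(String application, List<VirtualMachine> machines) {

    // Automated reasoning over the model: flag machines that violate a simple rule.
    List<String> validate() {
        List<String> issues = new ArrayList<>();
        for (VirtualMachine vm : machines) {
            if (vm.memoryGb() < vm.vcpus()) {
                issues.add(vm.name() + ": less than 1 GB of memory per vCPU");
            }
        }
        return issues;
    }
}

class TopologyExample {
    public static void main(String[] args) {
        Topology topology = new Topology("webshop", List.of(
                new VirtualMachine("frontend", 2, 4),
                new VirtualMachine("database", 8, 4))); // violates the rule
        topology.validate().forEach(System.out::println);
    }
}
```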

The lack of understanding of DSLs and modeling in general leaves a lot of untapped potential. The abstract syntax and semantics of models enable automated reasoning and facilitate automation, which brings me to the following point.

Industry does not exploit models@run-time and automation

The majority of IT professionals I have met use different data structures at design-time and run-time in their systems, which is time-consuming, error-prone, and hinders automation.

Models@run-time provides an elegant solution to this problem. It enables a consistent representation of both design-time and run-time information.

The promise of the 2000s, which envisioned a utopian world where software engineers would spend 80% of their time modeling and generate 80% of the source code with model-to-text transformations, has never become a reality. Nevertheless, models@run-time has gradually become part of self-adaptive systems, not only in prototypes but also in production.

Several cloud management platforms, such as the TOSCA-based Cloudify, have successfully applied this technique. Developers and operators specify cloud application topologies as models at design-time, while monitoring and adaptation engines programmatically manipulate these models at run-time. This is also the foundation of several AIOps platforms.
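
As a rough sketch of the idea (hypothetical types in Java, not the API of Cloudify or any other platform), the same topology model that operators write at design-time can be kept alive and manipulated by an adaptation engine at run-time:

```java
// Hypothetical models@run-time sketch: one model shared by design-time and run-time.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TopologyModel {
    // Desired number of instances per service, as specified at design-time.
    final Map<String, Integer> desiredInstances = new ConcurrentHashMap<>();
}

class AdaptationEngine {
    private final TopologyModel model;

    AdaptationEngine(TopologyModel model) { this.model = model; }

    // Called by a monitoring component; adapts the model based on observed load.
    void onCpuLoad(String service, double load) {
        if (load > 0.8) {
            // The run-time decision is recorded in the same model the operators wrote.
            model.desiredInstances.merge(service, 1, Integer::sum);
        }
    }
}

class ModelsAtRuntimeDemo {
    public static void main(String[] args) {
        TopologyModel model = new TopologyModel();  // design-time specification
        model.desiredInstances.put("frontend", 2);

        AdaptationEngine engine = new AdaptationEngine(model);
        engine.onCpuLoad("frontend", 0.95);         // run-time observation

        // A reconciler would now read the model and provision a third instance.
        System.out.println(model.desiredInstances); // {frontend=3}
    }
}
```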

Similar to the point above, the lack of understanding of models@run-time and automation is a missed opportunity.

What do we do about it?

Taking my experience into account, here is what I would recommend to academia, standardization organizations, and tool vendors:

Teach modeling and MDE in Bachelor’s degrees

I believe that fundamental concepts such as abstraction, modeling, metamodeling, descriptive and prescriptive models, concrete and abstract syntax, structural and attached constraints, informal and formal semantics, linguistic and ontological typing, conformance, to name a few, should be well understood by all future IT professionals. Therefore, I believe that they should be part of the curriculum of any computer science and software engineering degree, even at the Bachelor’s level.

At the universities where I have studied and worked, modeling is partly introduced in software engineering courses and further elaborated in MDE courses. The problem is that software engineering courses tend to focus on using UML for documentation purposes, while MDE courses typically belong to Master’s programmes only.
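
To give an idea of how concrete these fundamentals can be made even without dedicated tooling, here is a toy illustration in plain Java (not a real modeling framework) of a metamodel, a model, and a conformance check between the two:

```java
// Toy illustration of metamodeling and conformance; not a real modeling framework.
import java.util.Map;
import java.util.Set;

class Metamodel {
    // Abstract syntax: which types exist and which attributes each type declares.
    final Map<String, Set<String>> typesToAttributes =
            Map.of("Service", Set.of("name", "port"));

    // Conformance: an element conforms if its type exists and it only uses declared attributes.
    boolean conformsTo(String type, Map<String, Object> element) {
        Set<String> declared = typesToAttributes.get(type);
        return declared != null && declared.containsAll(element.keySet());
    }
}

class ConformanceDemo {
    public static void main(String[] args) {
        Metamodel metamodel = new Metamodel();
        Map<String, Object> service = Map.of("name", "frontend", "port", 8080);
        Map<String, Object> broken  = Map.of("name", "db", "color", "blue");

        System.out.println(metamodel.conformsTo("Service", service)); // true
        System.out.println(metamodel.conformsTo("Service", broken));  // false
    }
}
```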

Fix UML

When I studied UML during my Bachelor’s degree, one of the elements that baffled me was aggregation, graphically represented by a hollow diamond. Its concrete syntax was akin to that of composition, graphically represented by a filled diamond. However, its semantics was indistinguishable from that of association.

Martin Fowler discussed this issue in his book UML Distilled as far back as 15 years ago: “Aggregation is the part-of relationship. It’s like saying that a car has an engine and wheels as its parts. This sounds good, but the difficult thing is considering what the difference is between aggregation and association. In the pre-UML days, people were usually rather vague on what was aggregation and what was association. Whether vague or not, they were always inconsistent with everyone else. As a result, many modelers think that aggregation is important, although for different reasons. So the UML included aggregation […] but with hardly any semantics. As Jim Rumbaugh says, ‘Think of it as a modeling placebo’.”

In other words, aggregation is meaningless. If you disagree, I challenge you to provide me with definitions of the semantics of aggregation and association, where the distinction between the two is unambiguous.
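
To make the challenge concrete, here is a minimal Java sketch (class names are illustrative): composition and association map to recognizably different implementation patterns, whereas aggregation does not add a third, distinguishable one.

```java
// Illustrative classes only: how UML relationships typically map to code.
class Engine {}

class Driver {}

class Car {
    // Composition: the Car creates and owns its Engine; the part cannot outlive the whole.
    private final Engine engine = new Engine();

    // Association: the Car merely references a Driver that exists independently of it.
    private Driver driver;

    void assignDriver(Driver d) { this.driver = d; }

    // Aggregation ("the Car has Wheels as its parts")? Any plausible implementation ends up
    // looking exactly like one of the two fields above; there is no third, distinct pattern.
}
```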

Unfortunately, OMG keeps the concept in UML, and even researchers in the community keep using it. In fact, I have witnessed UML class diagrams containing aggregation at many of the MDE conferences I attended between 2008 and 2016. Worst of all, I have seen UML class diagrams containing both aggregation and association. Either the authors believed there is a difference between aggregation and association, or they confused aggregation with composition.

Aggregation is just one example. The syntax and semantics of several other concepts in UML are questionable as well. If UML is the best general-purpose modeling language the MDE community can offer, and if not even researchers in the community can use it properly, it is fair to expect that industry will either avoid UML or struggle using it.

Improve frameworks for DSLs and models@run-time

During my years as a researcher, I defined the syntax of two DSLs and contributed to the corresponding models@run-time environments. Eclipse Modeling Framework (EMF) and Connected Data Objects (CDO) have come a long way since their inception. However, even for IT professionals with a PhD on the subject, implementing DSLs and models@run-time environments with today’s frameworks requires a considerable amount of effort.
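
To give a flavour of the programming model, here is a minimal sketch using EMF’s dynamic (reflective) API, assuming the EMF core library is on the classpath; the package name and namespace URI are illustrative. Even this tiny example is verbose, and a realistic DSL or models@run-time environment needs far more than this (persistence, notification, constraints, tooling).

```java
// Minimal dynamic EMF sketch: define a one-class metamodel programmatically and
// instantiate it reflectively. Package name and namespace URI are illustrative.
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class MinimalMetamodel {
    public static void main(String[] args) {
        EcoreFactory factory = EcoreFactory.eINSTANCE;

        // Metamodel: an EClass "Node" with a String attribute "name".
        EClass node = factory.createEClass();
        node.setName("Node");
        EAttribute name = factory.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        node.getEStructuralFeatures().add(name);

        // The metamodel lives in an EPackage.
        EPackage pkg = factory.createEPackage();
        pkg.setName("topology");
        pkg.setNsPrefix("topo");
        pkg.setNsURI("http://example.org/topology");
        pkg.getEClassifiers().add(node);

        // Model: instantiate the metamodel reflectively and set the attribute.
        EObject instance = pkg.getEFactoryInstance().create(node);
        instance.eSet(name, "frontend");
        System.out.println(instance.eGet(name)); // prints "frontend"
    }
}
```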

My co-authors and I have discussed this issue in one of our papers: “Our assessment is that EMF and CDO are well-suited for DSL designers, but less recommendable for developers and even less suited for operators. For these roles, we experienced a steep learning curve and several lacking features that hinder the implementation of models@run-time […]. Moreover, we experienced performance limitations in write-heavy scenarios with an increasing amount of stored elements.”

Perhaps some of these issues have already been addressed by now. Nevertheless, I believe the frameworks for DSLs and models@run-time have to reach a much higher maturity level for industry to use them in production.

Reduce research on model transformation

During my years in academia, I have seen all sorts of research papers on model transformation. My gut feeling is that, for any model transformation need, there is a technique for that in the literature. The question is: to what extent are these techniques adopted? I suspect that the most optimistic researchers in the community expected model transformations to become as widespread as compilers. Well, they have not.

I believe the MDE community should rather concentrate on leveraging existing techniques to address concrete limitations in their frameworks. As of 2018, even something as simple as renaming an element of an Ecore model does not automatically update the attached OCL constraints.

Do not get me wrong—I am guilty myself, as I have published at least four papers on model transformation that did not achieve the impact I wished. Perhaps researchers in the community should move to other research topics, which brings me to the following point.

Increase research on multi-level modeling

As a researcher, I enjoyed working with multi-level modeling. My intuition is that this technique should be the gold standard for self-adaptive systems.

My co-authors and I have conducted research in the field of cloud management platforms, and the results are promising: “a smaller language definition in the multi-level case, with some other benefits regarding extensibility, flexibility, and precision.”

Despite these results, I believe the road ahead is long. The frameworks for multi-level modeling need to reach a sufficient technology readiness level for this technique to come out of the academic niche. I wish the multi-level modeling community were larger.

Conclusion

While I still believe MDE to be the best solution in several problem domains, I am afraid that the approach is stuck in the technological “valley of death.” To bridge this gap, academia and industry should collaborate more closely in the future. What do you think?

Footnotes

  1. Some researchers in the field would argue that this approach is not an engineering discipline and that it should be called model-driven development (MDD) instead. The Oxford English Dictionary defines engineering as “the branch of science and technology concerned with the design, building, and use of engines, machines, and structures.” Considering that software and data are in fact structures, I am perfectly comfortable with the term model-driven engineering, and I will not distinguish between MDE and MDD.

I am joining PwC Consulting

Today was my last day at EVRY. The picture shows a glimpse of my farewell gathering, where I presented the history of chocolate-hazelnut spreads. Because sometimes, you have to have fun at work!

Me presenting the history of chocolate-hazelnut spreads at EVRY

It has been great getting to know my colleagues during my time with the company. While I am excited about the new opportunity ahead of me, leaving excellent working relationships is bittersweet.

In two weeks I will start as a Senior Manager in the Business Technology group at PwC Consulting in Oslo. I look forward to learning more about new domains, about management, and about myself.

Double standards in Norwegian environmental culture?

“Norway: Environmental hero or hypocrite?” was the question the Financial Times asked a year ago. As a resident of Norway for the last decade and with a background in research and innovation, I have long been concerned with the same question.

Norway has implemented a number of measures for a green shift. For example, power generation is mainly based on renewable sources, and the number of electric cars per capita is the highest in the world. Nevertheless, the waste volume in Norway increased by 7% while recycling decreased by 1% from 2013 to 2014.

I was a research scientist at SINTEF between November 2012 and February this year. During those years, I was concerned that there were no trash cans for sorting food waste, plastic, bottles, glass, and metal, while there were plastic cups in every kitchen at the SINTEF offices in Oslo.

I believe that an organisation researching technology to fight global warming should “eat its own dog food”. A year ago, I suggested that the SINTEF administration in Oslo should reduce waste volume and increase recycling. Despite multiple reminders, they have never gotten back to me.

SINTEF is probably not the only organisation that does not sort its waste, but if not even a research organisation takes responsibility for the environment, then Norway has a problem with environmental culture.

While we wait for the authorities to force businesses to tackle the problem, tonnes of recyclable trash are thrown away as mixed waste every day. Is it not time to quit the double standards and actually start implementing a comprehensive green shift? The alternative is to get a reputation that is hard to get rid of: being an environmental hypocrite.

The original version of this article was published in Norwegian in Aftenposten in June 2017.

A step forward in my career

Dear friends and colleagues,

I am excited to announce that I have accepted a position as Senior Advisor at EVRY Cloud Services in Oslo. EVRY is the largest IT company in Norway and among the largest in the Nordics with almost 10 000 employees. Cloud Services is a new business area of EVRY that offers advisory and consulting services to assist customers with designing the cloud solutions that best fit their requirements. I will start on the 1st of March, and I am looking forward to it!

EVRY at Fornebu

I would like to thank SINTEF for four rewarding years with the organisation; I will try to be a good ambassador for SINTEF in the future.

With optimistic wishes,
Alessandro