I recently got certified in Microsoft Azure Fundamentals and AWS Certified Cloud Practitioner. Hoping to help others who plan to do the same, I wrote an article on LinkedIn on how to get certified in two weeks. Your feedback is welcome!
A few of you noticed that I disappeared from Facebook, and asked me if everything is fine with me. Let me start by reassuring you that everything is fine with my loved ones and me, both in Norway and Italy. That said, the reason why I deleted my Facebook account is that I cannot stand the disinformation and divisiveness on my feed anymore.
The Covid-19 pandemic has led to a new wave of conspiracy theories: from 5G antennas transmitting the virus via radio waves, to Bill Gates having engineered Covid-19, to the evergreen new world order orchestrating all of this. The echoes of the Brexit referendum and the last U.S. presidential elections are not hard to spot. It is the pitting of “skepticism” against “experts,” and of “people” against “elite.”
While conspiracy theories have long existed, Facebook and other social media have accelerated their circulation. Moderating content after it is shared thousands of times is insufficient. Curating knowledge before it is shared is just as crucial to contain disinformation. But the tech giant is failing at it spectacularly.
In 2016, Cambridge Analytica illicitly harvested data to produce the political profile of millions of Facebook users and target them with fear-mongering ads based on lies. As Facebook’s design fosters echo chambers—where outside views are discredited—these ads were remarkably effective.
As Carole Cadwalladr put it: Maybe you think, “Well, it was just a few ads. And people are smarter than that, right?” To which I would say, “Good luck with that.” The Brexit referendum and the last U.S. presidential election have already demonstrated that liberal democracy is broken.
I was naïve enough to hope that the Covid-19 pandemic would restore some trust in reliable, fact-based sources of information. I could not be more wrong.
A growing number of people experience a lack of control in their lives (e.g., long-term unemployment) and share conspiracy theories to gain a compensatory illusion of control. Detecting patterns where there are, in fact, none at least keeps the possibility of control open.
Unfortunately, in a post-truth society based more on collective than individual identities, conspiracy theories spread like wildfire because they serve as weapons in a tribal war.
Facebook has known about this for a while. “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” Facebook would feed users “more and more divisive content to gain user attention and increase time on the platform.” Nevertheless, Facebook shut down the efforts to make the site less divisive.
Four years after the Cambridge Analytica scandal, and six months after the start of the Covid-19 pandemic, very little has changed. Despite multinational companies now pulling ads from Facebook over inaction on hate speech, the tech giant is still doing too little to prevent disinformation and divisiveness. And what many people do not seem to understand is that this is bigger than any of us.
The president of the most powerful country in the world is an anti-intellectual who suggests curing Covid-19 with disinfectant injections and brags about his “tremendous job” in handling the pandemic, despite the U.S. topping every chart of infections and deaths. The silly 2006 movie Idiocracy does not seem so unrealistic anymore. Before you realize it, another embodiment of the Dunning–Kruger effect could be in charge of your country.
Now, if this doesn’t scare you, I don’t know what will.
Quitting Facebook will not help fight disinformation and divisiveness. Quite the contrary. But at least I will avoid daily frustration and invest my time more wisely.
Feel free to contact me on iMessage and Signal.
Stay safe and have a great summer!
The COVID-19 crisis has pushed through a digital transformation in record time. Traditional work dynamics are gone, and many of us now see our homes as our workplace.
We have never been more dependent on our employer’s IT solutions. While some tackle the transition to remote work without disruptions, others experience frustration with outdated devices and unstable services. This frustration gets compounded by dealing with non-technical issues such as suboptimal ergonomics or demanding children.
Most people are used to a seamless user experience in their private lives. Despite a global explosion in demand for Internet services, they continue to have video calls on FaceTime, share photos on Instagram, and watch movies on Netflix, exactly as before.
Employees expect comparable user experiences in their working lives. If the experience at work is poor, some may use private devices and services as alternative IT solutions. This workaround is known as “shadow IT,” and it can compromise the organization’s compliance and security. The consequences of shadow IT can quickly become expensive.
A sustainable IT architecture is crucial for realizing an effective digital workplace. It must balance scalability, security, and user experience. Here is how you, as CIO, can move forward to provide a digital workplace to your employees:
1. Migrate from on-premises to the cloud
Your organization probably still relies on on-premises infrastructure to operate business services, as well as end-user services such as connectivity, communication, and collaboration.
One of the issues with on-premises infrastructure is that you have to purchase and install servers, storage, networking, and other computing resources to expand its capacity—a process that may take months in normal circumstances, let alone during a pandemic.
When the actual demand surpasses the available capacity, you have a problem. The typical consequence is that you lose customers. In this scenario, you lose employees too, as they are unable to work.
Adopt hyper-scale infrastructure and software-as-a-service
The cloud offers hyper-scale infrastructure, which allows you to rapidly provision computing resources with minimal management effort.
With automation, the available capacity can scale up and down based on the actual demand. In this case, you can cope with unexpected spikes without drastic consequences such as losing employees along the way.
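As a concrete illustration, the scale-to-demand logic that cloud providers offer as a managed service can be sketched in a few lines. This is a minimal, hypothetical sketch (the threshold values, bounds, and function name are my own invention), not any provider’s actual algorithm:

```python
# Illustrative demand-based autoscaling (hypothetical thresholds and names).
# Real clouds offer this as a managed service, e.g. autoscaling groups.

def scale(current_instances: int, cpu_utilization: float,
          target: float = 0.6, min_instances: int = 2,
          max_instances: int = 100) -> int:
    """Return a new instance count that moves average CPU toward the target."""
    if cpu_utilization <= 0:
        return min_instances
    desired = round(current_instances * cpu_utilization / target)
    # Clamp to the allowed capacity range.
    return max(min_instances, min(max_instances, desired))

# A traffic spike raises utilization: capacity follows the demand.
assert scale(4, 0.9) == 6    # under-provisioned -> scale out
assert scale(10, 0.2) == 3   # over-provisioned -> scale in
```

The point is not the arithmetic but the feedback loop: provisioning reacts to measured demand within minutes, instead of waiting months for hardware procurement.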
Microsoft with Microsoft 365 and Google with G Suite offer cloud-based collaboration platforms. Other software-as-a-service providers cover plenty of additional capabilities for a digital workplace. You may want to widen their adoption in your organization.
2. Switch from perimeter- to zero trust security
Your organization needs to protect business and end-user services through an adequate security model.
Not unlike a medieval castle, the perimeter security model relies on a secured boundary between the private side of a network, often the intranet, and the public side of a network, often the Internet.
Virtual private networks (VPNs), firewalls, and other tools protected your IT environment from cyber threats relatively well during the ‘90s and 2000s, when you had few entry points for users and devices. However, these tools were not designed for your current IT environment, where you have heterogeneous users and devices, both on-premises and in the cloud.
Authenticate, authorize, and verify all users and devices
The zero trust security model relies on the principle that you should not trust anyone, neither inside nor outside your perimeter. You should instead authenticate, authorize, and verify all users and devices attempting to access your systems.
Identity and access management (IAM), multi-factor authentication (MFA), unified endpoint management (UEM), encryption, and other tools are better suited for this purpose. These tools allow you to flexibly combine legacy systems on-premises and modern systems in the cloud with heterogeneous users and devices.
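To make the principle concrete, here is a minimal sketch of a zero trust access decision. The data model and permission table are hypothetical; in practice these checks are delegated to IAM, MFA, and UEM products rather than hand-written:

```python
# Minimal zero trust access decision (hypothetical data model).
# Every request is evaluated on identity, device posture, and authorization,
# never on network location alone.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g. verified via IAM + MFA
    device_compliant: bool        # e.g. posture reported by a UEM agent
    resource: str
    user_roles: frozenset

# Hypothetical resource -> required-roles table.
PERMISSIONS = {"payroll": {"hr"}, "wiki": {"hr", "engineering"}}

def allow(req: AccessRequest) -> bool:
    if not req.user_authenticated:            # authenticate the user
        return False
    if not req.device_compliant:              # verify the device
        return False
    required = PERMISSIONS.get(req.resource, set())
    return bool(required & req.user_roles)    # authorize the request

assert allow(AccessRequest(True, True, "payroll", frozenset({"hr"})))
assert not allow(AccessRequest(True, True, "payroll", frozenset({"engineering"})))
```

Note that the same checks apply whether the request originates from the intranet or the Internet; that is the difference from the medieval-castle model.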
Microsoft, Google, and other major IT players recommend the zero trust security model and adopt it in their own companies. You may want to accelerate its implementation in your organization.
3. Focus on the user experience
Your organization may have neglected the user experience of IT services. IT departments may offer employees a one-size-fits-all PC, while alternatives such as Macs and iPads may be non-supported options. Some business applications may have cumbersome user interfaces. Others may require employees to access outdated remote desktop solutions. Some communication and collaboration applications may provide redundant functionalities, which can leave employees unsure about when to use what.
Avoid shadow IT
If this goes too far, employees could feel like the IT department provided them with tools that diminish rather than enhance their productivity. Their frustration could even lead them to shadow IT, with potentially severe consequences.
There are more efficient ways of working, which can raise employee engagement. To succeed, you need to provide a quality of service comparable to that of Apple and Google, whose devices and services “just work.” In other words, you need to provide state-of-the-art hardware, software, and underlying IT architecture.
A digital workplace requires a comprehensive approach to delivering a consumer-oriented IT environment, where the technology is continuously improved, and the workforce is frequently upskilled.
This approach is ambitious, and is going to cost—but it may be the most forward-looking investment you can make.
A shorter version of this article was published in Norwegian in Teknisk Ukeblad in May 2020.
Imagine if the greatest technological inventions in history—such as the steam engine, electricity, or the Internet—had been left stranded in research laboratories and never transferred to society. The world would look radically different today. There are plenty of untold stories of inventions that never saw the light of day, and many potentially breakthrough ideas fall into the so-called technological “valley of death” due to a gap between academic research and industrial commercialization. This is a missed opportunity for economic and social progress. In this article, I offer five actions for academia and industry to bridge the valley of death and co-create innovation.
In recent years, the public has often associated technological innovation with the most successful private companies in the world, such as Apple, Google, and Microsoft. These organizations have built an entrepreneurial culture characterized by networks (rather than hierarchies), empowerment (rather than control), and agile execution (rather than rigid planning). While these factors play a role in fostering technological innovation, one of the fundamental drivers remains technology research. Unlike traditional research—which aims at obtaining new knowledge about the real world—technology research aims at producing new and better solutions to practical problems.
Consider the disruption brought to the smartphone market by the iPhone. The user experience of Apple’s first smartphone was undoubtedly superior to the competition, and this was probably the recipe for its success. However, a significant portion of the technologies leveraged by the iPhone was originally conceived in research conducted by the public sector. The programming languages used in the iPhone, for instance, have roots in the research conducted between the 1950s and the 1980s by multiple organizations worldwide, including the Massachusetts Institute of Technology and the Norwegian Computing Center.
The journey of new technology from research to commercialization goes through a number of technology readiness levels (TRLs). These levels were developed at NASA between the 1970s and the 1990s, and indicate the maturity of technologies. The latest version of the scale from NASA includes nine TRLs and has gained widespread acceptance across governments, academia, and industry. The European Commission adopted this scale in its Horizon 2020 program.
The nine technology readiness levels are:
1. Basic principles observed
2. Technology concept formulated
3. Experimental proof of concept
4. Technology validated in a lab
5. Technology validated in a relevant environment
6. Technology demonstrated in a relevant environment
7. System prototype demonstrated in an operational environment
8. System complete and qualified
9. System deployed in an operational environment
Some tech giants are large enough to fund research units and work at all levels of this scale, but most companies cannot afford the high investments and specialized competencies this approach requires. Either way, in today’s dynamic and volatile markets they have to aim at doing the disrupting in order to avoid being disrupted. Companies that cannot afford research units have to innovate by relying on research conducted elsewhere, and the natural candidate is academia.
The question is: at which TRL does academia transfer technology to industry?
Academia tends to focus on TRLs 1–4, whereas industry prefers to work with TRLs 7–9, rarely 6. Therefore, TRLs 4–6 represent a gap between academic research and industrial commercialization. This gap is colloquially referred to as the technological “valley of death” to emphasize that many new technologies reach TRLs 4–6 and die there.
The issue of the valley of death has been studied extensively, and the scientific literature offers several proposals for bridging the gap. With the aim of facilitating technology transfer, some research organizations have focused entirely on technology research in the last 50 years. The Fraunhofer Society in Germany and SINTEF in Norway are notable examples. Still, these efforts have not entirely bridged the gap between academia and industry.
So, how can academia and industry bridge the technological valley of death and co-create innovation?
1. Academia and industry should better understand each other’s culture
To paraphrase a well-known proverb: academia is from Mars and industry is from Venus. This is reflected not only in the way people work but also in the way they communicate and, possibly, dress. The following statements are deliberate exaggerations, but they invite reflection on the cultural differences in the status quo. Academics have a long-term horizon; practitioners have a short-term horizon. Academics think (often critically); practitioners do (often routinely). Academics are pedantic; practitioners are practical. And the list could go on and on… Note that academia is not superior to industry, nor industry to academia. They are just different, and this diversity can be healthy for driving technological innovation. Understanding each other’s culture is of paramount importance to improving collaboration across teams on both sides.
2. Academics should better understand real-world industrial challenges
Academic research tends to aim at academic communities. As a consequence, an increasing number of research papers do not have adequate industrial relevance. As Lionel Briand—a recognized professor and researcher in software engineering—recently put it: “If a solution to a problem is not applicable and scalable, at least in some identifiable context, then the problem is not solved. It does not matter how well written and sound the published articles are, and how many awards they received.” Academics should join industrial fora (conferences, seminars, etc.) more often in order to understand real-world industrial challenges.
3. Practitioners should stay up-to-date with the state-of-the-art
The fact that certain research papers lack adequate industrial relevance does not mean that industry cannot resort to the scientific literature to find answers to its questions. Quite the contrary: the answer is often available in peer-reviewed journals, many of which are openly accessible. Consider, for instance, password expiration policies. There is evidence that changing passwords on a regular basis does more harm than good, since it drives users to new passwords that can be predicted from previous ones. Nevertheless, most IT departments keep enforcing this policy rather than adopting modern authentication solutions. Imagine if medical doctors neglected research results in the same way… Unrealistic, right? Practitioners should join academic fora and consult the scientific literature more often in order to stay up-to-date with the state-of-the-art.
4. Industry should hire more PhDs
Besides the qualities that may make PhDs more valuable than other job candidates, people who have invested at least three years of their lives conducting research in academia know the environment well enough to foster the identification of research trends, the understanding of research results, and—most of all—the collaboration with academia. With academic inflation driven by a steadily increasing number of doctorates every year, the academic pyramid is getting wider, and not all PhDs have the opportunity to secure permanent academic positions. Industry has the chance to capitalize on this trend by hiring more PhDs.
5. Academia and industry should conduct more joint research projects
Academia and industry can organize their joint research projects based on two models: 1) bilateral collaboration, where both academia and industry provide their contribution in the form of cash or in-kind, or 2) research projects partly funded by governmental organizations. The latter option may be better suited in case of tight budgets and lack of experience with this kind of collaboration.
Research is by definition unpredictable, and research projects cannot be assessed through simple cost-benefit analysis. Nevertheless, statistics on past research projects show a high long-term return on investment. The European Commission and the Research Council of Norway (as well as other governmental organizations around the world) offer a wide variety of research programs across a heterogeneous set of domains, which should be sufficient to cover the needs of most organizations. The competition for funding has increased dramatically in the last decade, so stakeholders should pick their partners and prepare their funding applications carefully.
While the success stories of fruitful collaboration between academia and industry are encouraging, there is a lot of untapped potential in the synergy between the two. Understanding each other’s culture, joining each other’s fora, leveraging PhDs’ competence and skills, and conducting joint research projects are certainly steps in the right direction. Technology has been responsible for tremendous economic growth and increased quality of life for billions of people. The technological valley of death is definitely worth bridging, regardless of whether the ultimate goal is increased profit or the greater good.
A shorter version of this article was published in Norwegian in Teknisk Ukeblad in February 2019.
Model-driven engineering (MDE)¹ is a branch of software engineering that aims to improve the effectiveness and efficiency of software development by shifting the paradigm from code-centric to model-centric.
I have been in the MDE community on and off for about 15 years. My supervisor at the University of L’Aquila, Alfonso Pierantonio, introduced me to MDE in 2003. Back then, the approach was still in its infancy and was not even called model-driven engineering. I wrote my Bachelor thesis in 2003 on code generation based on Unified Modeling Language (UML), and my Master thesis in 2006 on model versioning.
During my four years as a PhD candidate at the University of Bergen, I researched formal aspects of model versioning and multi-level modeling, and successfully defended my PhD thesis in 2011. During my following four+ years as a researcher at SINTEF, I conducted applied research on domain-specific languages and models@run-time for automating cloud management platforms. I also compared two-level and multi-level techniques for modeling cloud application topologies. My work has led to several publications in journals and conference proceedings.
Eventually, I decided to come back to the business world, with the aim of transferring these research results to industry. As an advisor and manager at Norway’s largest IT organizations, I have worked with architectures and solutions as well as trained colleagues and clients. While I did not expect MDE to be widespread, I did expect UML and domain-specific languages (DSLs) to be an integral part of these activities. Unfortunately, I have been disappointed.
What’s the problem?
Take my statements with a grain of salt—this is an opinion piece. My personal experience is not representative of the whole industry, and IT professionals working in other domains may have a widely different experience. Nevertheless, here is what I learned:
Industry does not use UML or uses it poorly
For the majority of IT professionals I have met (architects, developers, operators, etc.), “modeling” means drawing boxes and arrows on PowerPoint or similar applications. Only about 20% of IT professionals I have met are familiar with UML, and only a fraction of them uses it, in a few cases with questionable results. “I’ve seen things you people wouldn’t believe…” including diagrams mixing the syntaxes of use case, component, and sequence diagrams.
The lack of adoption of UML or any other general-purpose modeling language is problematic. It introduces ambiguities and increases the chances for misunderstandings.
Industry uses DSLs but does not know about it
The majority of IT professionals I have met use DSLs regularly. However, none of them knew what a DSL was before I told them.
In the field of cloud services, AWS CloudFormation, Azure Resource Manager, and Google Cloud Deployment Manager, as well as the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA), are all examples of DSLs. However, developers and operators often regard the specifications written in these DSLs as configuration files rather than models.
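To illustrate why such specifications are models rather than mere configuration files, consider a deliberately simplified topology. The syntax below is hypothetical, only loosely inspired by TOSCA-style templates. Once the specification is treated as a model, it supports automated reasoning, such as checking that every declared host actually exists:

```python
# A simplified cloud topology specification (hypothetical syntax).
# Treating it as a model enables automated reasoning over its structure.

topology = {
    "nodes": {
        "web_app":    {"type": "WebApplication", "host": "app_server"},
        "app_server": {"type": "VirtualMachine", "host": None},
        "database":   {"type": "Database",       "host": "db_server"},
    },
}

def unresolved_hosts(model: dict) -> list:
    """Return the nodes whose declared host does not exist in the model."""
    nodes = model["nodes"]
    return [name for name, spec in nodes.items()
            if spec["host"] is not None and spec["host"] not in nodes]

# Automated reasoning: "database" points at a host that is never declared.
assert unresolved_hosts(topology) == ["database"]
```

A plain configuration file is just text to be passed along; a model, by contrast, has an abstract syntax that tools can validate, transform, and act upon.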
The lack of understanding of DSLs and modeling in general leaves a lot of untapped potential. The abstract syntax and semantics of models enable automated reasoning and facilitate automation, which brings me to the following point.
Industry does not exploit models@run-time and automation
The majority of IT professionals I have met use different data structures at design-time and run-time in their systems, which is time-consuming, error-prone, and hinders automation.
Models@run-time provides an elegant solution to this problem. It enables a consistent representation of both design-time and run-time information.
The promise of the 2000s, which envisioned a utopian world where software engineers would spend 80% of their time modeling and generate 80% of the source code with model-to-text transformations, has never become a reality. Nevertheless, models@run-time has gradually become part of self-adaptive systems, not only in prototypes but also in production.
Several cloud management platforms, such as the TOSCA-based Cloudify, have successfully applied this technique. Developers and operators specify cloud application topologies as models at design-time, while monitoring and adaptation engines programmatically manipulate these models at run-time. This is also the foundation of several AIOps platforms.
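A minimal sketch of the idea follows, with hypothetical class and function names: the same model that describes the desired topology at design time is kept causally connected to the running system, so monitoring updates it and an adaptation engine derives actions from the gap between desired and observed state.

```python
# Minimal models@run-time sketch (hypothetical classes, not any real
# platform's API). One model serves both design time and run-time.

class NodeModel:
    def __init__(self, name: str, desired_instances: int):
        self.name = name
        self.desired_instances = desired_instances
        # Kept in sync with the running system by a monitoring engine.
        self.observed_instances = desired_instances

def adapt(model: NodeModel) -> str:
    """Derive an adaptation action from the desired/observed gap."""
    gap = model.desired_instances - model.observed_instances
    if gap > 0:
        return f"provision {gap} instance(s) of {model.name}"
    if gap < 0:
        return f"decommission {-gap} instance(s) of {model.name}"
    return "no action"

web = NodeModel("web_app", desired_instances=3)
web.observed_instances = 1   # monitoring reports two crashed instances
assert adapt(web) == "provision 2 instance(s) of web_app"
```

Because design-time and run-time share one representation, there is no translation layer between "what was specified" and "what is running" for errors to hide in.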
Similar to the point above, the lack of understanding of models@run-time and automation is a missed opportunity.
What do we do about it?
Taking my experience into account, here is what I would recommend to academia, standardization organizations, and tool vendors:
Teach modeling and MDE in Bachelor’s degrees
I believe that fundamental concepts such as abstraction, modeling, metamodeling, descriptive and prescriptive models, concrete and abstract syntax, structural and attached constraints, informal and formal semantics, linguistic and ontological typing, conformance, to name a few, should be well understood by all future IT professionals. Therefore, I believe that they should be part of the curriculum of any computer science and software engineering degree, even at the Bachelor’s level.
At the universities where I have studied and worked, modeling is partly introduced in software engineering courses and further elaborated in MDE courses. The problem is that software engineering courses tend to focus on using UML for documentation purposes, while MDE courses typically belong to Master’s programmes only.
When I studied UML during my Bachelor’s degree, one of the elements that baffled me was aggregation, graphically represented by a hollow diamond. Its concrete syntax was akin to that of composition, graphically represented by a filled diamond. However, its semantics was indistinguishable from that of association.
Martin Fowler discussed this issue in his book UML Distilled already 15 years ago: “Aggregation is the part-of relationship. It’s like saying that a car has an engine and wheels as its parts. This sounds good, but the difficult thing is considering what the difference is between aggregation and association. In the pre-UML days, people were usually rather vague on what was aggregation and what was association. Whether vague or not, they were always inconsistent with everyone else. As a result, many modelers think that aggregation is important, although for different reasons. So the UML included aggregation […] but with hardly any semantics. As Jim Rumbaugh says, ‘Think of it as a modeling placebo’.”
In other words, aggregation is meaningless. If you disagree, I challenge you to provide me with definitions of the semantics of aggregation and association, where the distinction between the two is unambiguous.
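The placebo nature of aggregation is easy to see in code. In this illustrative sketch (class names are my own), aggregation and association map to exactly the same implementation, a plain object reference, while composition at least implies an ownership relation that the constructor can make explicit:

```python
# Aggregation vs. association vs. composition in code (illustrative).
# Aggregation and association typically compile to the same thing: a
# reference. Only composition carries an observable lifecycle constraint.

class Engine: ...
class Wheel: ...

class Car:
    def __init__(self):
        # Composition: the Car creates and owns its Engine; the Engine's
        # lifecycle is bound to the Car's.
        self.engine = Engine()
        # Aggregation ("has wheels as parts")? Association ("related to
        # wheels")? The implementation below is identical either way.
        self.wheels = []

    def add_wheel(self, wheel: Wheel):
        self.wheels.append(wheel)   # nothing distinguishes part-of from related-to

car = Car()
car.add_wheel(Wheel())
assert isinstance(car.engine, Engine) and len(car.wheels) == 1
```

If two modeling constructs cannot be told apart in any implementation, their distinction exists only on the diagram.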
Unfortunately, OMG keeps the concept in UML, and even researchers in the community keep using it. In fact, I witnessed UML class diagrams containing aggregation at many of the MDE conferences I attended between 2008 and 2016. Worst of all, I have seen UML class diagrams containing both aggregation and association. Either the authors believed there is a difference between the two, or they confused aggregation with composition.
Aggregation is just one example. The syntax and semantics of several other concepts in UML are questionable as well. If UML is the best general-purpose modeling language the MDE community can offer, and if not even researchers in the community can use it properly, it is fair to expect that industry will either avoid UML or struggle using it.
Improve frameworks for DSLs and models@run-time
During my years as a researcher, I defined the syntax of two DSLs and contributed to the corresponding models@run-time environments. Eclipse Modeling Framework (EMF) and Connected Data Objects (CDO) have come a long way since their inception. However, even for IT professionals with a PhD on the subject, implementing DSLs and models@run-time environments with today’s frameworks requires a considerable amount of effort.
My co-authors and I have discussed this issue in one of our papers: “Our assessment is that EMF and CDO are well-suited for DSL designers, but less recommendable for developers and even less suited for operators. For these roles, we experienced a steep learning curve and several lacking features that hinder the implementation of models@run-time […]. Moreover, we experienced performance limitations in write-heavy scenarios with an increasing amount of stored elements.”
Perhaps some of these issues have already been addressed by now. Nevertheless, I believe the frameworks for DSLs and models@run-time have to reach a much higher maturity level for industry to use them in production.
Reduce research on model transformation
During my years in academia, I have seen all sorts of research papers on model transformation. My gut feeling is that, for any model transformation need, there is a technique for that in the literature. The question is: to what extent are these techniques adopted? I suspect that the most optimistic researchers in the community expected model transformations to become as widespread as compilers. Well, they have not.
I believe the MDE community should rather concentrate on leveraging existing techniques to address concrete limitations in their frameworks. In 2018, even something as simple as renaming an element of an Ecore model does not lead to the automatic update of the attached OCL constraints.
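For illustration, here is a toy sketch of what such a rename refactoring could do. The model representation below is hypothetical (not the EMF/OCL API): the point is simply that renaming an element should also update the constraints attached to the model that mention it.

```python
# Toy rename refactoring over a hypothetical model representation.
# Renaming an element updates both the element set and the attached
# OCL-like constraint strings that reference it.

import re

model = {
    "elements": {"Account", "Owner"},
    "constraints": ["Account.balance >= 0",
                    "Owner.accounts->size() <= 5"],
}

def rename_element(model: dict, old: str, new: str) -> None:
    model["elements"].discard(old)
    model["elements"].add(new)
    # Whole-word match so that e.g. "accounts" is left untouched.
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    model["constraints"] = [pattern.sub(new, c) for c in model["constraints"]]

rename_element(model, "Account", "BankAccount")
assert model["constraints"][0] == "BankAccount.balance >= 0"
```

A real framework would of course operate on the abstract syntax of the constraints rather than on strings, which is exactly the kind of engineering effort I would rather see the community invest in.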
Do not get me wrong—I am guilty myself, as I have published at least four papers on model transformation that did not achieve the impact I wished. Perhaps researchers in the community should move to other research topics, which brings me to the following point.
Increase research on multi-level modeling
As a researcher, I enjoyed working with multi-level modeling. My intuition is that this technique should be the gold standard for self-adaptive systems.
My co-authors and I have researched in the field of cloud management platforms, and the results are promising: “a smaller language definition in the multi-level case, with some other benefits regarding extensibility, flexibility, and precision.”
Despite these results, I believe the road ahead is long. The frameworks for multi-level modeling need to reach a sufficient technology readiness level for the technique to come out of its academic niche. I wish the multi-level modeling community were larger.
While I still believe MDE to be the best solution in several problem domains, I am afraid that the approach is stuck in the technological “valley of death.” To bridge this gap, academia and industry should collaborate more closely in the future. What do you think?
¹ Some researchers in the field would argue that this approach is not an engineering discipline and that it should be called model-driven development (MDD) instead. The Oxford English Dictionary defines engineering as “the branch of science and technology concerned with the design, building, and use of engines, machines, and structures.” Considering that software and data are in fact structures, I am perfectly comfortable with the term model-driven engineering, and I will not distinguish between MDE and MDD.