
Open Source in Artificial Intelligence: Why it Matters to Give Away Your Software for Free

I. Introduction

Open sourcing a technology is somewhat counterintuitive at first glance. Why on earth should a company give away something it has invested money and people in? I have already written about this trend, but I keep sharpening my thinking around it, and this post is the result of some recent further considerations.

The open source model is quite hard to reconcile with the traditional SaaS model, especially in the financial sector. Yet we are observing many firms providing cutting-edge technologies and algorithms for free. While in some cases there is a specific business motivation behind it (e.g., Google releasing TensorFlow to avoid conflicts of interest with its cloud offering), the decision to open source (part of) a technology actually represents an emerging trend.

Tools are nowadays less relevant than people or data, and a sharing mindset is a key asset for organizations. Starting from this premise, we can divide the considerations on open source into two clusters: business considerations and individual considerations.


II. The Business Perspective

From a business perspective, the basic idea is that it is really hard to keep pace with current technological development, and you don’t want your technology to become obsolete in three months’ time. It is better to give it away for free and set the benchmark than to keep it proprietary and discard it after a few months. Furthermore, open sourcing:


III. The Individual Perspective

From the developer’s point of view, instead, there is a series of interesting considerations:


IV. Effects of the Open Source Model on AI Development

These are some of the reasons why this model works nowadays, even though there are advocates who claim that incumbents are not really maximally open (Bostrom, 2016) and only release technology that is already somewhat old to them.

My personal view is that companies are getting the best out of spreading their technologies around without paying any cost or suffering any counter-effect: they still own unique large datasets, platforms, and a huge investment capacity that allow only them to scale up.

Regardless of the real reasons behind this strategy, the effect of this business model on AI development is controversial. According to Bostrom (2016), in the short term, greater openness could increase the diffusion of AI. Software and knowledge are non-rival goods, and this would enable more people to use and build on top of previous applications and technologies at a low marginal cost, and to fix bugs. There would also be strong brand implications for companies.


In the long term, though, we might observe less incentive to invest in research and development because of free riding. Hence, there should exist a way to earn monopoly rents from the ideas individuals generate. On the positive side, open research is implemented to build absorptive capacity (i.e., it is a means of building skills and keeping up with the state of the art); it might bring extra profits from owning complementary assets whose value is increased by new technologies or ideas; and, finally, it is fostered by individuals who want to demonstrate their skills, build their reputation, and eventually increase their market value.





V. The War of Talents

I am adding a final concept which I find intriguing, though a bit speculative. It concerns the war of talents and the vicious circle we are observing in the academia-industry relationship.

The problem is indeed twofold:

1) Universities are losing faculty and researchers to private companies. This prevents universities from training the next generation of PhD students, who are the ones now driving the AI wave;

2) Things are already moving, and many graduate students are deciding not to pursue a PhD at all and to jump directly into the private tech sector. This means we might not have a new generation of PhDs at all.

So: no students, no teachers. What are we left with? Training on the job. I believe open source is helping private companies gradually be recognized as the new ‘knowledge labs’ they, in my opinion, already were, a role traditionally assigned to universities.

It is an additional way in which big incumbents are taking over from universities: a new, indirect approach to education that might eventually disrupt the traditional learning we all know.

References

Bostrom, N. (2016). Strategic Implications of Openness in AI Development. Working paper.

This article was previously published on Medium.
