Executive Summary
Technological and research breakthroughs in the field of Artificial Intelligence (AI) have put it on the brink of revolutionizing almost every industrial sector. It is important not only to consider its applicability in certain domains but also to understand when it is not required, given existing, well-defined computational techniques. Developing AI systems requires access to high-volume, high-quality datasets, and a key complement to this disruption will be the network infrastructure of the Internet-of-Things (IoT), notably 5G deployment. The forestry sector is an example of an industry that can leverage this combination, where data collection and analytics will be crucial to effective operations and supply chain management. Industrial sectors can be vulnerable to control by government or corporate entities monopolizing innovative technology through aggressive patenting, and countries need to be aware of how this tandem of technologies can have a deleterious influence on how resources are managed.
Introduction
Given the complexity of its technical foundations and a lack of general understanding outside the field, the term artificial intelligence (AI) is often misapplied and misunderstood, exaggerated by companies and over-hyped in the media. Subject matter experts across different industries consistently stress the issue of companies mislabelling the capability of software products branded as powered by AI. The field will undoubtedly have a significant impact on economies and broader society, and it is important to understand its capabilities and limitations. This article discusses the scope of artificial intelligence, its applications to the forestry sector, and strategic intellectual property trends that can affect not just commercial innovation in the sector but potentially national economic growth.
Artificial Intelligence Background
The field of artificial intelligence (AI) encompasses a wide range of techniques and frameworks dating back to the 1950s. AI is the theory and development of computer systems able to perform tasks normally requiring human intelligence (e.g. visual perception, speech recognition, translation between languages, etc.). The current state of the art falls under two broad categories, often called General Artificial Intelligence and Narrow Artificial Intelligence.[1] (See Figure 1. Classes of artificial intelligence.)
General artificial intelligence: a notional concept that has not yet been realized, General Artificial Intelligence refers to a system capable of performing the full range of intellectual tasks that a human brain can, such as reasoning, learning, and problem-solving in complex and changing environments, behaving in ways that until recently were thought to require human intelligence.
Narrow artificial intelligence: arguably the hallmark of narrow AI is the family of Machine Learning (ML) algorithms. ML techniques allow software to improve automatically through experience, a process that enables systems to learn and make predictions based on historical data. Included in this category is the specialized framework known as Deep Learning.
In current usage, the term AI is effectively synonymous with narrow AI, which automates individual, repetitive tasks by learning from patterns found in data. To differentiate between the two, one could say that artificial intelligence creates knowledge, while machine learning creates information.
Subclasses of Narrow Artificial Intelligence
Both machine learning and deep learning models rely fundamentally on training data to learn relationships and to improve their efficiency and ability to achieve the desired output. Training data refers to a dataset that has been collected, prepared, and provided to the model for the purpose of creating and validating a statistical model prior to real-world deployment. The quality, quantity, structure, and contents of training data are key determinants of how machine learning and deep learning models will perform in a real environment.
Machine Learning (ML)
Machine Learning (ML) is a subclass of AI that investigates algorithms with the ability to learn and improve by themselves from experience, without being explicitly programmed.[2] There are three main approaches to implementing the learning function: supervised, unsupervised, and reinforcement learning.
- Supervised: the task of teaching an ML algorithm by providing a labelled training dataset, which specifies the input features the learned function should use and gives examples of correct outputs (a minimal code sketch follows this list).
- Unsupervised: this type of learning provides unlabelled input data, from which the algorithm must structure the data, discover patterns, classify inputs, learn functions, and produce outputs without external validation or support. Unsupervised learning can be used to discover patterns in data that are otherwise difficult to discern.
- Reinforcement: this approach differs from supervised and unsupervised learning in that “correct” inputs and outputs are never specified to the system. Software agents take actions in an environment so as to maximize some notion of cumulative reward: a program is rewarded when it learns a function or achieves the correct output efficiently.
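To make the supervised case concrete, the following sketch trains a classifier on a labelled dataset and scores it on held-out examples. It is a minimal illustration only, assuming Python with the open source scikit-learn library available; the dataset and choice of model are incidental to the point.

```python
# A minimal supervised-learning sketch: labelled examples (inputs X with
# known outputs y) train a classifier, which is then scored on unseen data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)        # a small labelled training dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)              # learn a function from labelled data
print("held-out accuracy:", model.score(X_test, y_test))
```

An unsupervised workflow would look much the same, except that no `y` is provided and the algorithm (for example, a clustering method) must find structure in `X` on its own.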
Deep Learning
Deep Learning is a subfield of machine learning built from multiple cascading layers of processing units, known as an artificial neural network, loosely modelled on the human nervous system (a parallel referred to as neural coding). Deep learning architectures enable a computer system to train itself on historical data by recognizing patterns and making probabilistic inferences.
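As a rough illustration of these cascading layers, the sketch below trains a tiny two-layer network on the XOR function with plain NumPy and gradient descent. The architecture, learning rate, and iteration count are arbitrary choices made for the example, not recommendations.

```python
# A minimal sketch of a two-layer neural network trained on XOR: each layer
# transforms its input, and errors are propagated backward to adjust weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass through layer 1
    p = sigmoid(h @ W2 + b2)                 # forward pass through layer 2
    grad_p = (p - y) * p * (1 - p)           # backpropagate the output error
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_p); b2 -= 0.5 * grad_p.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h); b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(p, 2))   # predictions should approach [0, 1, 1, 0]
```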
Applicability of AI/Machine Learning
The project management challenges of designing and implementing any system fall within three parameters: cost, complexity, and development time. Software engineering principles stress the importance of identifying the scope of the computational problem and specifying the required outcomes. Managing stakeholder expectations is a crucial component of any project, and instances where a particular AI strategy is misapplied can quickly produce unrealistic and unanticipated outcomes: this often occurs when the definitions of general AI and narrow AI become conflated.
Improperly scoping a project will likely lead to uncertain deliverables, cost overruns, and inefficiencies. General AI solutions are non-trivial to design, complex to implement, and costly in development time and resources; even then, such a solution may not deliver the depth of insight expected of it.
With accurate scoping, appropriate machine learning solutions can be prototyped cost-effectively, given the diversity of open source libraries and computing platforms available. The key challenge, as with any ML problem, is ensuring access to sufficient datasets. Problems well suited to machine learning include text mining and classification, recommendation, medical diagnosis, human-behaviour prediction, financial analysis, and image recognition, to name a few.
An equally important consideration is when not to use AI/ML. Many computational problems can be solved efficiently using conventional optimization algorithms and techniques, without requiring artificial intelligence at all.
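For instance, routing a vehicle over a road network is solved exactly by a classical shortest-path algorithm; no training data or learning is involved. The sketch below applies Dijkstra's algorithm using only the Python standard library; the road network shown is hypothetical.

```python
# Classical optimization without AI: Dijkstra's shortest-path algorithm.
import heapq

def dijkstra(graph, source):
    """Return the shortest distance from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # skip stale queue entries
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

roads = {"mill": [("camp", 7.0), ("depot", 3.0)],   # edge weights in km
         "depot": [("camp", 2.5)],
         "camp": []}
print(dijkstra(roads, "mill"))   # shortest distances from "mill"
```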
Problem Spectrum in the Forestry Industry
Artificial intelligence, specifically machine learning, is already being used for smarter forest management. Until recently, forest inventory methods were largely unchanged from the early 1900s, requiring labour-intensive excursions into forests to establish sample plots from which data were extrapolated. Deep learning approaches can process massive amounts of satellite data to surface insights into forest health that would not be detected by conventional means. Platforms now use ML to analyze tree species, wood volume, and tree dimensions for more informed decisions; it is even being used to identify patterns for predicting wildfires in Canada.
Complex systems often consist of several integral components, and engineering challenges exist at every point along this functional continuum. A popular assumption is that AI can be leveraged for any situation, when in fact its effectiveness is often limited. There are many instances of problems in the forestry sector with computational solutions drawn from other domains:[3],[4]
- solving log bucking problems with market constraints, at both the stand and forest-wide levels, using mixed-integer programming (MIP) and column generation approaches;[5]
- solving forest-wide log bucking problems using heuristic techniques;[6]
- methods to formulate and solve strategic forest planning models using linear programming (a toy LP sketch follows this list);[7]
- methods to solve spatially constrained, tactical harvest scheduling problems using heuristic techniques;[8]
- combining the scheduling tools developed for strategic and tactical scheduling problems with the log bucking tools created for the forest-wide problems, to solve complicated production scheduling problems.[9]
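To show how compact such conventional formulations can be, the toy sketch below solves a strategic harvest-planning linear program with scipy.optimize.linprog. All areas, yields, and limits are invented for illustration; real strategic models involve thousands of variables and constraints.

```python
# A toy harvest-planning LP: choose hectares x1, x2 to cut in two stands to
# maximize timber volume, subject to an area budget and per-stand caps.
from scipy.optimize import linprog

c = [-110.0, -90.0]             # per-hectare volumes (m^3), negated: linprog minimizes
A_ub = [[1.0, 1.0]]             # x1 + x2 <= total area budget
b_ub = [500.0]                  # at most 500 ha this planning period
bounds = [(0, 300), (0, 350)]   # hectares available in each stand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)          # optimal hectares per stand and total volume
```

Here the optimum is to cut all 300 ha of the higher-yield stand and 200 ha of the other, a result the solver finds exactly and instantly.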
Data Collection, AI/Machine Learning and Network Infrastructure
As with most advances in Information and Communications Technologies (ICT), data acquisition and analysis (real-time, near real-time, and strategic) will be the driving force behind innovation in the forestry sector; this includes all levels of the supply chain, as well as resource management. Sensor-equipped harvesting machines, autonomous hauling/trucking, surveying and planning, satellite imaging, and logistical decision support will all contribute to the massive volumes of information collected. Key to moving this data is network infrastructure that supports machine-to-machine, edge, and cloud-level data communication and processing.
Any cutting-edge technology in this domain depends greatly on advances in 5G networking, sensor networks and the Internet-of-Things (IoT), smart agriculture, AI, and data analytics. For example, IoT and AI systems are critical to supporting efficient system operations, and 5G will benefit the forestry industry where transportation and automation are key to time-critical and efficient logistical decision-making.[10]
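The edge/cloud split mentioned above can be pictured with a small, hypothetical sketch: an edge node collapses a batch of raw harvester sensor readings into one compact summary record before sending it upstream, trading raw fidelity for bandwidth. The field names and values are invented for the example.

```python
# Hypothetical edge-side pre-aggregation of harvester sensor readings:
# many raw measurements become one compact record for cloud upload.
import json
import statistics

def summarize(readings_mm, machine_id):
    """Collapse a batch of raw stem-diameter readings into one record."""
    return json.dumps({
        "machine": machine_id,
        "n": len(readings_mm),
        "mean_mm": round(statistics.mean(readings_mm), 1),
        "stdev_mm": round(statistics.stdev(readings_mm), 1),
    })

batch = [212, 198, 240, 225, 205, 231]     # raw diameters from one pass (mm)
print(summarize(batch, "harvester-07"))    # one small message instead of six
```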
Intellectual Property, Foreign Competition and Resources
Intellectual Property Law
Patents have been described, at their most fundamental level, as weapons of economic warfare. A patent is a government-issued document that describes an invention and creates a legal situation in which the patented invention can normally only be exploited (made, used, sold, imported) by, or with the authorization of, the patentee. The protection a patent grants is limited in time, generally 20 years from the filing date of the application.
A technology license agreement is an arrangement in which the owner or licensor of some technological intellectual property (IP) accepts compensation to let another party use, change, or resell the IP. The licensor can only license technology to which it owns exclusive rights; both parties must receive a benefit from the partnership; the agreement must be consistent with other agreements; and, to be successful, the licensor and licensee have to work closely together as the technology is incorporated and adapted.
During the last few decades, the number of ICT-related patents has increased dramatically. Combined with the fragmentation of IP rights, this growth has generated a series of potentially problematic consequences.[11] Patent thickets, royalty stacking, increased patent litigation[12],[13] (in particular around standard-essential patents), and the difficulty of defining fair, reasonable, and non-discriminatory licensing terms are highly contentious issues in the industry.[14]
Patents and Economic Leverage
A strategy used by Patent Assertion Entities (PAEs), often referred to as “patent trolls”, is to acquire patents with no intention of ever developing the products or processes, but simply with the goal of collecting licensing fees or coercing companies into out-of-court settlements.[15] Many players in the technology sector have argued that patent systems, in the United States in particular, are having difficulty fulfilling their duty in the current economic environment, and there have been attempts to reform the system in the United States, Japan, and other countries. The weaknesses that are exploited can leave inventors stifled and penalized, and allow patent holders to throttle innovation by demanding extortionate royalties from companies that rely on their technology to make and sell products. If patents can be tools to stifle innovation, then companies closely affiliated with governments can potentially leverage them to slow, or interfere with, economic growth in associated sectors in other countries.
For example, the activities of Huawei very much align with the policy of the Chinese government.
Huawei holds 56,492 active patents on telecommunications, networking and other high-tech inventions worldwide, and is a leading patent-holder in the United States, ranking higher than major American companies such as General Electric and AT&T. Huawei is aggressively pursuing royalties and licensing fees in response to recent restrictions on its access to American markets and suppliers; it is heavily asserting patents in technology related to 5G networks, the underlying communication infrastructure of the Internet-of-Things. For example, the company is seeking damages in patent courts, demanding US telecom operator Verizon Communications pay US$1 billion to license the rights to patented technology.
This collateral effect can impact companies developing efficient systems for forestry in North America, and in turn the industry itself. Patent assertion in crucial technology areas that affect forestry can constrain technological development and even withhold access to potentially valuable technology. Licensing arrangements can be made sufficiently onerous as to make accessing the technology impossible. Alternatively, favourable licensing agreements can ease the way for other companies to jump ahead in some technology domains, giving them a notable advantage.
Patents in these domains will potentially impact companies working in North American markets, as patent owners will possess legal leverage over the use of their ideas. Aggressive patent activity in the technology underpinning IoT and 5G infrastructure can have a deleterious effect on forestry resource management, to say nothing of national economic development.
Conclusion
Given its potential as a transformative power, it is important to understand the different degrees of artificial intelligence and their associated capabilities and limitations. It is equally important not only to consider its applicability in certain domains but also to recognize areas where it is unnecessary given existing computational techniques. Successful deployment of AI requires access to high-volume, high-quality datasets, and a key complement to this disruption will be the network infrastructure of the Internet-of-Things, notably 5G technology. One industry that can leverage this combination is the forestry sector, where data collection and analytics will be crucial to commercial success and resource management. Finally, it is important to consider that aggressive trends in patent activity can have an impact not only on business operations, but on economic growth itself.
[1] Russell, S., Norvig, P. (2009). Artificial Intelligence: A Modern Approach. Prentice Hall Press.
[2] Murphy, K.P. (2012). Machine Learning: A Probabilistic Perspective. The MIT Press.
[3] D’Amours, S., Rönnqvist, M., Weintraub, A. (2008). Using Operational Research for Supply Chain Planning in the Forest Products Industry. INFOR: Information Systems and Operational Research, 46:4, 265-281. DOI: 10.3138/infor.46.4.265.
[4] Rönnqvist, M. (2003). Optimization in forestry. Mathematical Programming, 97, 267-284. DOI: 10.1007/s10107-003-0444-0.
[5] Arce, J., Carnieri, C., Sanquetta, C., Filho, A. (2002). A Forest-Level Bucking Optimization System that Considers Customer’s Demand and Transportation Costs. Forest Science, 48, 492-503.
[6] Ibid.
[7] García, O. (1990). Linear Programming and related approaches in forest planning. New Zealand Journal of Forestry Science, 20, 307-331.
[8] Bettinger, P., Bettinger, J. (2006). A New Heuristic Method for Solving Spatially Constrained Forest Planning Problems Based on Mitigation of Infeasibilities Radiating Outward from a Forced Choice. Silva Fennica, 40. DOI: 10.14214/sf.477.
[9] Chauhan, S., Frayret, J-M., Lebel, L. (2009). Multi-commodity supply network planning in the forest supply chain. European Journal of Operational Research, 196, 688-696. DOI: 10.1016/j.ejor.2008.03.024.
[10] Pau, L-F. (2019). The Potential of Wireless 5G in Forestry Robotics. 22, 556189. DOI: 10.19080/ARTOAJ.2019.22.556189.
[11] Heller, M. and Eisenberg, R. (1998). Can Patents Deter Innovation? The Anti-commons in Biomedical Research. Science, 280, 698-701.
[12] Hall, B., Helmers, C., von Graevenitz, G., and Rosazza-Bondibene, C. (2013). A Study of Patent Thickets. Intellectual Property Office.
[13] Galasso, A. and Schankerman, M. (2010). Patent Thickets, Courts and the Market for Innovation. RAND Journal of Economics, 41, 472-503.
[14] Comino, S., Manenti, F., Thumm, N. (2017). The Role of Patents in Information and Communication Technologies (ICTs): A Survey of the Literature. DOI: 10.13140/RG.2.2.17392.25601.
[15] Pohlmann, T. and Opitz, M. (2013). Typology of the Patent Troll Business. R&D Management, 43(2), 103-120.
Peter J. Taillon is a Senior Data Analyst with the Policy Development & Research Group at the Information and Communications Technology Council (ICTC). Peter has extensive experience spanning both the academic and private sectors, having worked as a professor, researcher, management consultant, and software developer with SMEs and start-ups. His expertise is in artificial intelligence, data science, Internet-of-Things/sensor networks, and Big Data. As an AI strategist and thought-leader, he plays a crucial role with the Natural Resources Canada Canadian Forest Service Think Tank on Advanced Analytics and Artificial Intelligence, and is a key member of the ICTC Artificial Intelligence Advisory Committee and the Smart Cities Labour Supply Taskforce. Peter holds a PhD in Computer Science from Carleton University, where his research focused on complexity theory and parameterized complexity, with applications to combinatorial graph problems.