AI Tech Alert: Key Differences Between Transfer Learning and Incremental Learning
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF) discusses the efficiency of transfer learning and incremental learning for edge AI applications. While transfer learning allows the use of pretrained models, it poses challenges in accuracy and privacy due to cloud dependency. Incremental learning offers a solution as it can operate on-device, accommodating new data without extensive resources. BrainChip's Akida technology supports these learning methods, enabling a range of applications from smart health to AI in autonomous vehicles.
- Incremental learning allows on-device training, enhancing privacy and reducing costs.
- Akida technology supports diverse applications, including smart health and autonomous vehicles.
- Transfer learning has accuracy challenges and requires significant task-specific data.
- Cloud dependency in transfer learning introduces privacy and security risks.
In transfer learning, applicable knowledge from a previously trained AI model is “imported” and used as the basis of a new model. After taking this shortcut of starting from a pretrained model, such as one trained on an open-source image or NLP dataset, new objects can be added to customize the result for the particular scenario.
The primary drawback of this approach is accuracy. Fine-tuning the pretrained model requires large amounts of task-specific data to learn new weights. Because it involves working with the internal layers of the pretrained model to make it useful for the new task, it may also require more specialized, machine-learning-savvy skills, tools, and service vendors.
When used for edge AI applications, transfer learning involves sending data to the cloud for retraining, incurring privacy and security risks. Once a new model is trained, any time there is new information to learn, the entire training process needs to be repeated. This is a frequent challenge in edge AI, where devices must constantly adapt to changes in the field.
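The fine-tuning workflow described above can be sketched in a few lines of plain Python. The pretrained layer and the toy task below are hypothetical stand-ins, not any particular library's API; the point is only that the imported weights stay frozen while a new task-specific head is fit on fresh data.

```python
# Hypothetical pretrained "feature extractor": two fixed linear units whose
# weights are "imported" from a prior model and frozen during fine-tuning.
PRETRAINED_W = [[0.5, -0.3], [0.2, 0.8]]

def extract_features(x):
    """Frozen pretrained layer: never updated while fine-tuning."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in PRETRAINED_W]

def train_head(samples, labels, epochs=200, lr=0.1):
    """Fit only the new task-specific head on top of the frozen features."""
    head, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = extract_features(x)
            err = y - (sum(h * fi for h, fi in zip(head, f)) + bias)
            # Delta-rule update touches only the head, never PRETRAINED_W.
            head = [h + lr * err * fi for h, fi in zip(head, f)]
            bias += lr * err
    return head, bias

# Task-specific examples -- the data transfer learning still needs in quantity.
xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
ys = [0.0, 1.0, 1.0, 1.0]

head, bias = train_head(xs, ys)
preds = [1 if sum(h * fi for h, fi in zip(head, extract_features(x))) + bias > 0.5
         else 0 for x in xs]
# preds -> [0, 1, 1, 1]
```

Note that if the frozen features are a poor match for the new task, no amount of head training recovers the lost accuracy, which is the limitation the article describes.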
“First and foremost is the issue of there being an available model that you can make work for your application, which is not likely for anything but very basic AI, and then you need enough samples to retrain it properly,” said
Incremental learning is another approach, often used to reduce the resources needed to train models because of its efficiency and its ability to accommodate new and changed data inputs. An edge device that can perform incremental learning within the device itself, rather than sending data to the cloud, can learn continuously.
Incremental or “one-shot” learning can begin with a very small set of samples and grow its knowledge as more data is absorbed. The ability to evolve as more data arrives also results in higher accuracy. When retraining is done on the device’s hardware instead of in the cloud, the data and application remain private and secure.
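One way to make the one-shot-then-grow behavior concrete is a nearest-prototype classifier that keeps a running mean per class. This is an illustrative sketch, not BrainChip's actual algorithm: the class names and data are invented, but the sketch shows a learner that enrolls a class from a single sample and absorbs new samples without revisiting old data or retraining from scratch.

```python
class IncrementalPrototypeClassifier:
    """Toy on-device learner: one running-mean "prototype" per class.

    It can start from a single example per class (one-shot) and update
    one sample at a time, with no stored history and no full retraining.
    """

    def __init__(self):
        self.prototypes = {}  # label -> (mean_vector, sample_count)

    def learn(self, x, label):
        if label not in self.prototypes:
            self.prototypes[label] = (list(x), 1)  # one-shot enrollment
            return
        mean, n = self.prototypes[label]
        # Incremental mean update: O(d) work, no old samples needed.
        new_mean = [m + (xi - m) / (n + 1) for m, xi in zip(mean, x)]
        self.prototypes[label] = (new_mean, n + 1)

    def predict(self, x):
        def dist2(mean):
            return sum((m - xi) ** 2 for m, xi in zip(mean, x))
        return min(self.prototypes, key=lambda lbl: dist2(self.prototypes[lbl][0]))

clf = IncrementalPrototypeClassifier()
clf.learn([0.0, 0.0], "quiet")   # a single sample is enough to start
clf.learn([1.0, 1.0], "alarm")
print(clf.predict([0.9, 0.8]))   # prints "alarm"
clf.learn([0.2, 0.1], "quiet")   # new field data refines the class on-device
```

Because each update touches only one prototype, the raw samples never have to leave the device, which is the privacy property the article highlights.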
“Most of the time, AI projects don’t have large enough data sets in the beginning, and don’t have access to cloud computing for retraining, so they keep paying their vendor whenever anything changes,” said Mankar. “We generally recommend incremental learning because it addresses most of the shortcomings of transfer learning and requires dramatically lower computing costs.”
BrainChip’s Akida brings artificial intelligence to the edge in a way that existing technologies cannot. The solution is high-performance, small, and ultra-low power, and it enables a wide array of edge capabilities. The Akida neuromorphic system-on-chip (NSoC) and intellectual property can be used in applications including Smart Home, smart health, and autonomous vehicles.
About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF)
Additional information is available at https://www.brainchipinc.com
View source version on businesswire.com: https://www.businesswire.com/news/home/20210824005366/en/