In a world awash with data and constantly evolving technologies, Machine Learning (ML) is emerging as one of the most influential pillars of the 21st century, opening new frontiers in information processing and decision making. It is a tool that has shown the potential to redefine entire industries, from healthcare to manufacturing to entertainment.
Remarkable advances have emerged in this field, including generative AI capable of creating images and video, even in 3D. One example in 3D generation is "Gaussian Splatting", which reconstructs three-dimensional scenes from ordinary 2D video. The result is so realistic that the apparent colour of object surfaces changes with the viewer's position, and the scene can even be viewed from perspectives from which it was never filmed.
In addition, Large Language Models (LLMs) such as OpenAI's GPT-4 mark a milestone in natural language processing. They produce texts that are, at times, practically indistinguishable from those written by humans, as well as programming code, and they gain new capabilities by building on other models: describing what is happening in an image, understanding what you say, speaking, and even generating images (by producing prompts for DALL-E 3 on our behalf).
In fact, Microsoft is currently developing "Project JARVIS" (yes, like the famous AI in Iron Man), which intends to use ChatGPT as a "brain" in charge of selecting the best AI model to solve the problem we have posed, then interacting with that model to deliver what we asked for. This is how an LLM, which only generates text, can appear to see us, listen to us, speak to us and generate images for us.
But what is Machine Learning?
In essence, Machine Learning (ML) is a branch of artificial intelligence in which machines, or more precisely computer programs, "learn" from data. Unlike traditional programming, where a machine is given a set of explicit instructions, in ML it is given data and allowed to find patterns and make predictions or decisions based on them.
Think of it like teaching a child to recognise fruit. Instead of explicitly telling him what each fruit looks like, you show him examples repeatedly until he can identify them himself.
And how is this supposed to work?
The typical ML process involves three phases:
- Phase 1 – Training: The model is provided with a dataset and the "response" we expect it to "learn" (e.g. pictures of cats labelled "cat" and pictures of dogs labelled "dog"). During this process, the model adjusts its internal parameters to make the best possible predictions.
- Phase 2 – Validation: The model's performance is evaluated on a dataset different from the training one (pictures of dogs and cats it has not seen before). This checks that the model does not merely memorise the data (overfitting), but generalises well to new examples; it is no good if it only identifies the dogs and cats it has already seen.
- Phase 3 – Testing: Once we are satisfied with its performance, the model is used on real-world data to make predictions or decisions (following the example above, an app that uses the camera to tell me whether the animal in front of me is a dog or a cat).
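The three phases above can be sketched in a few lines of plain Python. The "cats and dogs" below are invented feature pairs (weight in kg, ear length in cm) chosen purely for illustration, and the "model" is the simplest possible one, a 1-nearest-neighbour classifier, so the whole train/validate/test cycle fits in one snippet:

```python
import random

# Toy data invented for illustration: each "animal" is (weight_kg, ear_length_cm).
random.seed(0)
cats = [((random.uniform(3, 5), random.uniform(3, 5)), "cat") for _ in range(30)]
dogs = [((random.uniform(20, 30), random.uniform(6, 12)), "dog") for _ in range(30)]
data = cats + dogs
random.shuffle(data)

# Phase split: training / validation / test.
train, val, test = data[:40], data[40:50], data[50:]

def predict(x, train_set):
    """1-nearest-neighbour: the simplest possible 'model'."""
    nearest = min(train_set,
                  key=lambda item: (item[0][0] - x[0]) ** 2 + (item[0][1] - x[1]) ** 2)
    return nearest[1]

def accuracy(split):
    # Fraction of examples in this split that the model labels correctly.
    return sum(predict(x, train) == y for x, y in split) / len(split)

print(f"validation accuracy: {accuracy(val):.2f}")
print(f"test accuracy: {accuracy(test):.2f}")
```

Because the two invented clusters are well separated, the model scores perfectly here; with messier real data, the validation score is exactly what tells you whether the model has merely memorised its training set.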
Okay, and how many different forms of learning are there?
The different types of machine learning are continually evolving, and new types of learning are emerging as time goes on, but at the time of writing I think it is interesting to focus mainly on the following:
1. Supervised learning

Supervised learning is probably the best-known ML modality. Here, the model is trained on a dataset with known (labelled) inputs and outputs. The goal is that, once trained, the model can make accurate predictions on unseen data based on what it has learned.
Example: If we have a dataset of houses with their characteristics and prices, a supervised model could learn to predict the price of a new house based on its characteristics.
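The house-price example can be made concrete with ordinary least squares on a single feature. The dataset below is made up (sizes in m², prices in thousands), and the closed-form slope/intercept formulas stand in for what a real training loop would learn:

```python
# Hypothetical dataset: (house size in m², price in thousands of €).
houses = [(50, 150), (70, 210), (90, 270), (120, 360), (150, 450)]

xs = [s for s, _ in houses]
ys = [p for _, p in houses]
n = len(houses)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form ordinary least squares for a single feature:
# slope = cov(x, y) / var(x), intercept = mean_y - slope * mean_x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in houses) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict_price(size_m2):
    return slope * size_m2 + intercept

print(predict_price(100))  # data is exactly linear (price = 3 * size), so → 300.0
```

This is "supervised" precisely because every training example carries the answer (the price) we want the model to reproduce for new inputs.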
2. Unsupervised learning

In unsupervised learning, the model works with unlabelled data. The goal here is not to predict a specific output, but to discover structures or patterns in the data.
Example: A clustering algorithm (such as k-means) can identify customer segments in a shopping dataset, without being explicitly told what types of customers exist.
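Here is a minimal k-means sketch along those lines. The customer data is invented (annual spend, visits per month), and the initialisation is deliberately naive; note that nothing tells the algorithm which segments exist, it finds them itself:

```python
# Made-up customer data: each point is (annual spend, visits per month).
points = [(100, 2), (120, 3), (110, 2), (900, 20), (950, 22), (880, 19)]

def kmeans(points, k=2, iters=10):
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(points)
print(sorted(len(c) for c in clusters))  # two segments of 3 customers each
```

With this data the algorithm converges to a "low-spend" and a "high-spend" segment, which is exactly the kind of structure a marketer might not have labelled in advance.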
3. Semi-supervised learning

As the name suggests, semi-supervised learning lies between supervised and unsupervised learning. It uses both labelled and unlabelled data for training. This modality can be useful when you have a large amount of unlabelled data and a small amount of labelled data.
Example: Imagine a face recognition system where only some faces are labelled. The model can learn from the labelled faces and apply that knowledge to label the unlabelled faces.
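One common semi-supervised recipe, self-training with pseudo-labels, can be sketched in miniature. The 2D points below stand in for face embeddings (all invented); only two are labelled, and the rest inherit labels from their nearest labelled neighbour:

```python
# Self-training sketch: propagate labels from a few labelled points
# to unlabelled ones via nearest-neighbour (data invented for illustration).
labelled = [((1.0, 1.0), "A"), ((9.0, 9.0), "B")]
unlabelled = [(1.5, 0.8), (8.7, 9.2), (0.9, 1.3)]

def nearest_label(x, labelled_set):
    # Return the label of the closest labelled point.
    return min(labelled_set,
               key=lambda item: (item[0][0] - x[0]) ** 2
                              + (item[0][1] - x[1]) ** 2)[1]

# Pseudo-label the unlabelled points and fold them into the training set.
pseudo = [(x, nearest_label(x, labelled)) for x in unlabelled]
labelled.extend(pseudo)
print([lab for _, lab in pseudo])  # → ['A', 'B', 'A']
```

In a real system the pseudo-labelling and retraining steps would be repeated, keeping only the most confident pseudo-labels each round.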
4. Reinforcement learning
In reinforcement learning, the model (or agent) interacts with an environment and makes decisions. It receives feedback from the environment in the form of rewards or penalties, and aims to maximise the reward over time.
Example: Teaching a robot to walk. The robot receives a reward when it moves forward and a penalty if it falls.
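A walking robot is beyond a snippet, but the same reward-driven loop can be shown with tabular Q-learning on a toy environment I am inventing here: a five-cell corridor where the agent starts at cell 0 and earns +1 for reaching cell 4:

```python
import random

# Q-learning sketch on a 5-cell corridor (environment invented for illustration).
random.seed(0)
n_states, actions = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(200):                      # training episodes
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)   # walls at both ends
        r = 1.0 if s2 == 4 else 0.0             # reward only at the goal
        # Temporal-difference update toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(4)]
print(policy)  # the learned policy moves right in every cell
```

Nobody ever tells the agent "go right"; that behaviour emerges purely from the reward signal, which is the defining trait of reinforcement learning.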
5. Transfer learning
Transfer learning refers to using knowledge acquired in one task to assist in learning a different but related task. It is especially useful in situations where little data is available for the new task, but you have a model previously trained on a similar task. By transferring prior knowledge, you can speed up and improve the process of training the new model.
Example: Imagine you have a model trained to recognise different types of vehicles (such as cars, trucks and motorbikes). If you now want to train a model to distinguish specific car models, you could use the prior knowledge from the first model to help with this new task, instead of starting from scratch.
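The vehicle example can be sketched as follows. In practice the reused part would be a trained network's feature extractor; here a hand-made function stands in for it, and all the data and class names are invented:

```python
# Transfer-learning sketch: reuse a "pretrained" feature extractor and
# train only a small classifier head on top of it for the new task.
def pretrained_features(vehicle):
    # Pretend this mapping was learned on a large general vehicle dataset.
    return (vehicle["wheels"] / 4.0, vehicle["length_m"] / 5.0)

# Small labelled dataset for the NEW task (invented car models).
new_task = [
    ({"wheels": 4, "length_m": 3.6}, "city car"),
    ({"wheels": 4, "length_m": 5.0}, "saloon"),
]

# "Training" the new head: memorise one feature prototype per class.
prototypes = {label: pretrained_features(v) for v, label in new_task}

def classify(vehicle):
    f = pretrained_features(vehicle)
    # Assign the class whose prototype is closest in feature space.
    return min(prototypes,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, prototypes[lab])))

print(classify({"wheels": 4, "length_m": 3.5}))  # → 'city car'
```

The point is the division of labour: the expensive part (the feature extractor) is inherited, so the new task needs only two labelled examples instead of thousands.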
And what are the applications of all this?
The applications of ML are enormously varied, spanning fields as different as healthcare, manufacturing and entertainment, as mentioned at the start. These are only examples; in practice, the possibilities are limited only by the imagination of the implementing engineer.
This is all very nice, but... Where do I learn all this?
Nowadays in Spain, those who hold a higher-level degree in a computer-science-related branch can take a specialisation course in AI and Big Data, and there are also various university master's degrees. But for those who lack the required qualification, or who prefer not to rely on formal study, there is a huge amount of online learning material, both paid and free. In this article we will focus on what is, for me, one of the most interesting platforms to start with: Kaggle.
What is Kaggle?
Kaggle, founded in 2010, started as a platform for prediction and modelling competitions. It is now owned by Google LLC and has become an online community where data scientists and machine learning enthusiasts share and collaborate on their projects, ideas and discoveries. Kaggle offers users the ability to learn, collaborate and compete in the same space.
Learning Machine Learning with Kaggle
In addition to specific courses that you can find on the platform itself and that allow you to learn by practicing with the same tools that a data scientist would use, one of the highlights of Kaggle is its “Notebooks” section, where users can create and share Jupyter Notebooks. These notebooks are essentially interactive guides and tutorials covering a wide variety of topics, from basic introductions to advanced modelling techniques. It is an excellent resource for those looking for hands-on learning, as they can view the code, modify it and run it in real time.
In addition, Kaggle features “Datasets”, a vast collection of datasets published by users and organisations. These datasets, ranging from financial information to satellite imagery, are essential to practice and learn how real data is handled and analysed in the world of Machine Learning.
Competing in Kaggle
Competitions are undoubtedly the jewel in Kaggle’s crown. Companies, organisations and researchers pose real problems and offer monetary prizes for the most efficient solutions. These competitions vary in complexity and range from sales prediction to disease detection in medical imaging.
Participating in a competition provides not only the chance to win prizes, but also the opportunity to learn in a hands-on environment, improve your skills and build a portfolio. Moreover, excelling in these competitions can open doors to career opportunities in the world of data analytics and Machine Learning.
Machine Learning is more than just a data processing technique; it is at the epicentre of the technological revolution of our age. From transforming the way we interact with machines, to redefining entire industries, its impact is immense and still in its early stages. With a wide variety of applications and tools at our disposal, there has never been a better time to enter this field and harness its potential. The future is now and it is driven by Machine Learning.