Recently, Roboflow machine learning engineer Jacob Solawetz sat down with Elisha Odemakinde, an ML researcher and Community Manager at Data Science Nigeria, for a fireside chat.
During the conversation, Jacob walks through his journey from a career in financial trading to artificial intelligence, his move from natural language processing to computer vision, and how he sees the computer vision space evolving. He also covers prominent use cases we are seeing at Roboflow, shares tips for building more successful models, and fields audience Q&A.
Of particular note, a member of the audience asks how Roboflow assists with transfer learning. Transfer learning is the practice of using what a model has learned from successfully completing one task to improve that model's performance on a new task. A model learns a given task by optimizing a set of weights against a loss function. These weights – the model's parameters – can be randomly initialized, or they can start from a known set of values. Transfer learning takes advantage of starting a model's optimization from a set of weights that worked well on another task. The success of this strategy largely depends on the similarity of the two tasks.
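To make the idea concrete, here is a minimal toy sketch of transfer learning using a hypothetical one-parameter model fit by gradient descent. This is purely illustrative (it is not Roboflow's API or a real vision model): we "pretrain" a weight on a related task, then fine-tune it on the target task, and compare against training from scratch with the same small step budget.

```python
def fit(w, data, lr=0.02, steps=200):
    """Minimize mean squared error of y ≈ w * x by gradient descent."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Two similar tasks: task A has slope 1.9, task B (the target) has slope 2.0.
task_a = [(x, 1.9 * x) for x in [1.0, 2.0, 3.0]]
task_b = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]

w_pretrained = fit(0.0, task_a)                    # "pretraining" on the related task
w_scratch = fit(0.0, task_b, steps=3)              # 3 steps from a zero initialization
w_transfer = fit(w_pretrained, task_b, steps=3)    # 3 steps from the pretrained weight

# With the same tiny fine-tuning budget, the transferred weight lands
# much closer to the target slope of 2.0 than training from scratch does.
```

Because the pretrained weight already sits near the target task's optimum, the fine-tuned model needs far fewer update steps; this is the same intuition behind initializing a detection model from COCO-pretrained weights instead of random values.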
Roboflow assists with transfer learning in two ways: (1) your datasets are versioned, so as you build models against a given version, you have an archive of exactly where in your model-building process a known result was achieved; and (2) the Roboflow Model Library lets users load pretrained weights, either from their own prior tasks or from a publicly available dataset like COCO (note: this is a video link).
We welcome questions like these! Feel free to get in touch.
Want to be notified of events like this in the future? Subscribe to our newsletter.