Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Journal 1, 2009
This paper is about the number 1. The number 2 is left for future work.
Recommended citation: Your Name, You. (2009). "Paper Title Number 1." Journal 1. 1(1). http://academicpages.github.io/files/paper1.pdf
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Recommended citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2). http://academicpages.github.io/files/paper2.pdf
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3). http://academicpages.github.io/files/paper3.pdf
Published:
Invited talk at the Harbin Institute of Technology (HIT) (Shenzhen, China): Ensemble Learning for Evolving Data Streams.
Published:
An overview of StreamDM, a real-time analytics open source software library built on top of Spark Streaming, developed at Huawei’s Noah’s Ark Lab and Télécom ParisTech.
Published:
The main goal of this tutorial is to introduce attendees to big data stream mining theory and practice. We will use the StreamDM framework to illustrate concepts and also to demonstrate how data stream mining pipelines can be deployed using StreamDM.
Published:
The volume of data is rapidly increasing due to advances in information and communication technology, and much of this data arrives in the form of streams. Learning from this ever-growing amount of data requires flexible learning models that self-adapt over time. In addition, these models must take into account many constraints: (pseudo) real-time processing, high velocity, and dynamic multi-form change such as concept drift and novelty. The tutorial was combined with a workshop on the same topic.
Published:
The main difference between the batch machine learning implementations in Spark (MLlib and Spark ML) and StreamDM is that the latter focuses on algorithms that can be trained and adapted incrementally. This can be a huge advantage in some domains, as it enables learning models to be updated automatically. StreamDM is currently under development by Huawei's Noah's Ark Lab and Télécom ParisTech.
Published:
We present how to build random forest models from streaming data. This is achieved by training, predicting and adapting the model in real-time with evolving data streams. The implementation is on the open source library StreamDM, built on top of Apache Spark.
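The talk's StreamDM implementation is not reproduced here, but the core idea behind streaming ensembles such as random forests can be sketched with online bagging (Oza and Russell's scheme), which approximates bootstrap resampling on an unbounded stream by weighting each arriving instance with a Poisson(1) draw. The sketch below is a hypothetical, minimal illustration in pure Python — the MajorityClassLearner base learner is a toy stand-in for a real Hoeffding tree, and none of this is StreamDM or Spark code:

```python
import random
from collections import Counter

class MajorityClassLearner:
    """Toy incremental base learner: predicts the most frequent label seen."""
    def __init__(self):
        self.counts = Counter()
    def learn(self, x, y, weight=1):
        self.counts[y] += weight
    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

class OnlineBagging:
    """Online bagging: each member sees each instance Poisson(1) times,
    approximating bootstrap resampling without storing the stream."""
    def __init__(self, n_members=10, seed=42):
        self.members = [MajorityClassLearner() for _ in range(n_members)]
        self.rng = random.Random(seed)
    def _poisson1(self):
        # Knuth's algorithm for sampling Poisson(lambda=1)
        L, k, p = 2.718281828459045 ** -1, 0, 1.0
        while True:
            p *= self.rng.random()
            if p <= L:
                return k
            k += 1
    def learn(self, x, y):
        for m in self.members:
            w = self._poisson1()
            if w > 0:
                m.learn(x, y, weight=w)
    def predict(self, x):
        votes = Counter(m.predict(x) for m in self.members)
        return votes.most_common(1)[0][0]

# Train on a small synthetic stream where label 'b' dominates (2 out of 3).
ens = OnlineBagging()
stream = [((i,), 'a' if i % 3 == 0 else 'b') for i in range(300)]
for x, y in stream:
    ens.learn(x, y)
print(ens.predict((301,)))  # most likely 'b'
```

Adaptive random forests add per-member drift detectors and random feature subsets on top of this resampling scheme; the Poisson weighting shown here is the piece that makes ensemble training single-pass.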
Published:
In many domains, data is generated at a fast pace. A clear example is Internet of Things (IoT) applications, where connected sensors yield large amounts of data in short periods. To build predictive models from this data, you need to either settle for traditional offline learning or attempt to learn from the data incrementally. A significant setback with the offline learning approach is that it is slow to react to changes in the domain, and these changes can have a catastrophic impact on the model's predictive performance, since the patterns the model was trained on are no longer valid. An online approach where the model is trained incrementally can potentially fix this; however, the untold story is that the existing challenges for offline learning are still present (and are even amplified) when processing the data online. These challenges include, but are not limited to, raw data preprocessing, efficient incremental updates to models, algorithms to detect changes and react to them, and dealing with lots of unlabeled and delayed-labeled data.
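One of the challenges named above — detecting changes and reacting to them — can be illustrated with a deliberately simplified change detector: it flags drift when the error rate in a recent window exceeds the long-run error rate by a fixed margin. This is a pure-Python teaching sketch, not a real detector; production methods such as DDM or ADWIN replace the fixed threshold with statistical bounds:

```python
import random
from collections import deque

class WindowedDriftDetector:
    """Simplified change detector: compares the error rate over a recent
    window against the long-run error rate and flags drift when the
    recent rate exceeds the long-run rate by a fixed threshold."""
    def __init__(self, window=50, threshold=0.25):
        self.recent = deque(maxlen=window)
        self.total_errors = 0
        self.n = 0
        self.threshold = threshold
    def update(self, error):
        # error: 1 if the model misclassified this instance, else 0
        self.recent.append(error)
        self.total_errors += error
        self.n += 1
        if self.n < self.recent.maxlen:
            return False  # not enough evidence yet
        long_run = self.total_errors / self.n
        recent_rate = sum(self.recent) / len(self.recent)
        return recent_rate - long_run > self.threshold

# A stream whose error rate jumps from ~10% to ~60% at instance 500,
# simulating a concept drift that invalidates a fixed model.
rng = random.Random(0)
det = WindowedDriftDetector()
drift_at = None
for i in range(1000):
    err = 1 if rng.random() < (0.1 if i < 500 else 0.6) else 0
    if det.update(err) and drift_at is None:
        drift_at = i
print(drift_at)  # flagged shortly after instance 500
```

In a full pipeline, a positive signal from the detector would trigger a reaction — resetting the model, or replacing an ensemble member trained on the old concept.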
Published:
Presented the paper “Streaming Random Patches for Evolving Data Stream Classification.”
Published:
Lecture at the IoT Stream Data Mining course (Paris, France), part of the second-year Data and Knowledge Master's programme of Université Paris-Saclay, 2019-2020.
Published:
In this tutorial we introduced attendees to data stream mining procedures and illustrated big data stream mining applications with examples using scikit-multiflow. The tutorial was scheduled for IJCAI 2020 but was presented in early 2021 due to the pandemic. Website: https://streamlearningtutorial2020.netlify.app/
Published:
Presentation at the first Artificial Intelligence Researchers Association of New Zealand meetup. Conference website. My presentation video.
Published:
Guest lecture for the Latin America Masterclasses organised by Education New Zealand (ENZ)
Published:
Presentation at the machine learning seminar series at the University of Waikato
Published:
Invited talk at the Joint Machine Learning Seminar hosted collaboratively by Cardiff University and the University of Waikato
Published:
Invited talk at the Itaú data science meeting.
Published:
Talk at the International WEKA user conference introducing MOA. Program website. My presentation video.
Published:
Keynote at the IncrLearn workshop in ICDM 2021. Program: https://incrlearn.sciencesconf.org/resource/page/id/7 Slides: https://incrlearn.sciencesconf.org/data/Gomes_IncrLearn21.pdf
Published:
Guest Lecture entitled “Machine Learning for Streaming Data” for undergrad and graduate students at Michigan Technological University (USA)
Published:
Presentation at the Artificial Intelligence Researchers Association of New Zealand meetup. Conference website. My presentation video.
Published:
Guest lecture on Concept Drift Detection and Applications for data scientists from Stats NZ
Published:
Guest lecture on Machine Learning for Streaming Data for graduate students at China University of Mining and Technology
Published:
Guest lecture on semi-supervised learning (SSL) and delayed labelling for undergraduate students at the University of Waikato (Hamilton, NZ)
Published:
Machine learning for data streams (MLDS) has been a significant research area since the late 90s, with increasing adoption in industry over the past few years. Despite commendable efforts in open-source libraries, a gap persists between pioneering research and accessible tools, presenting challenges for practitioners, including experienced data scientists, in implementing and evaluating methods in this complex domain. Our tutorial addresses this gap with a dual focus. We discuss advanced research topics, such as partially delayed labeled streams, while providing practical demonstrations of their implementation and assessment using Python. By catering to both researchers and practitioners, this tutorial aims to empower users to design and conduct experiments and to extend existing methodologies.
Published:
The field of Machine Learning for Data Streams has seen growing interest and adoption in recent years, particularly in industry. Despite this progress, there remains a noticeable gap between cutting-edge research and the practical tools available, making it difficult for even experienced data scientists to apply and evaluate these techniques in real-world scenarios. Our tutorial is designed to address this issue by focusing on two key areas. We explore advanced topics, such as handling streams with partially delayed labels, while providing practical, Python-based demonstrations for implementation and evaluation. This dual approach aims to empower both researchers and practitioners to develop new experiments and extend current methodologies.
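A staple of the practical demonstrations in tutorials like these is prequential (test-then-train) evaluation, the standard protocol for streams: each instance is first used to test the model and only then to train it, so accuracy is measured on genuinely unseen data. A minimal pure-Python sketch, assuming no particular stream-mining library (the LastLabelModel baseline is a hypothetical illustration):

```python
def prequential_accuracy(stream, model):
    """Test-then-train: predict on each instance before learning from it."""
    correct = total = 0
    for x, y in stream:
        if model.predict(x) == y:
            correct += 1
        model.learn(x, y)  # the label becomes available after prediction
        total += 1
    return correct / total

class LastLabelModel:
    """Trivial baseline: always predict the most recently seen label."""
    def __init__(self):
        self.last = None
    def predict(self, x):
        return self.last
    def learn(self, x, y):
        self.last = y

# On a stream with long runs of identical labels, the baseline errs only
# at the first instance and at each of the 9 label switches: 10/1000 errors.
stream = [((i,), 'a' if (i // 100) % 2 == 0 else 'b') for i in range(1000)]
print(prequential_accuracy(stream, LastLabelModel()))  # 0.99
```

The same loop structure underlies prequential evaluators in libraries such as MOA and scikit-multiflow; the loop also marks the point where delayed-label variants diverge, since in those settings the label for an instance arrives some time after the prediction was made.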
Published:
Our tutorial aims to bridge the gap between research and practical tools with a dual focus. We discuss important research topics, such as partially delayed labeled streams, while providing practical demonstrations of their implementation and assessment using CapyMOA, an open-source library that provides efficient algorithm implementations through a high-level Python API. The tutorial also included exercises and many examples. Link to publication.
Undergraduate and MSc course, University of Waikato, School of Computing & Mathematical Sciences, 2020
This paper is an introduction to stream data mining. Data streams are everywhere, from F1 racing and electricity networks to news feeds. Data stream mining relies on incremental algorithms that process streams under strict resource limitations. This paper focuses on, and extends, the methods implemented in MOA (Java) and scikit-multiflow (Python), two open-source stream mining software suites currently being developed by the Machine Learning group at the University of Waikato. More information.
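A central idea behind the incremental algorithms covered in MOA-style courses is the Hoeffding bound, which tells an incremental decision tree when it has seen enough instances to commit to a split. The sketch below computes the bound itself (a standard formula, not MOA's implementation): with probability 1 - delta, the observed mean of n samples is within epsilon of the true mean.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound epsilon for n i.i.d. observations of a variable
    with the given range. A Hoeffding tree splits a leaf when the
    observed gap between the best and second-best split criterion
    exceeds epsilon, guaranteeing the choice with confidence 1 - delta."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# The bound shrinks as more instances arrive: more data, more certainty.
for n in (100, 1000, 10000):
    print(n, round(hoeffding_bound(1.0, 1e-7, n), 4))
```

This is why such trees can process a stream in a single pass with bounded memory: each leaf only accumulates sufficient statistics until the bound permits a split, rather than storing the instances themselves.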
Undergraduate, Victoria University of Wellington, ECS, 2020
The lectures cover the following main topics: search techniques; machine learning, including basic learning concepts and algorithms, neural networks, and evolutionary learning; reasoning under uncertainty; planning and scheduling; knowledge-based systems; and AI philosophy. The course includes a substantial amount of programming and covers both science and engineering applications. More information.