Matthieu Vergne's Homepage

Last update: 27/01/2024 18:39:42

I like teaching, I like research, and I like Java, so this blog is a bit of all of that. I mainly write technical posts, which might serve as material for research papers or Java projects, in a style which tries to be accessible to most people. I consider clarity and rigour to be top-priority requirements, so if you feel like something is hard to understand or not well proven/sourced, don't hesitate to contact me.


I have been programming in Java since 2009, and as a generalist I like to develop libraries which can be reused in many places. Consequently, I have some interest in general structures and methods, which has led me to develop several projects (often small, but not always) that you can find on my GitHub account. Now, the best way to make generic stuff is to ensure that it is usable everywhere, which is why I release my code as Open Source, and more precisely under the CC0 license (as much as possible). Additionally, if I find a project interesting enough, or if I think a lot about some concepts or methods that I use in these projects, I may write posts about them to share the idea in a less technical way than pure code. This is what the following list is about:

Due to my deep interest in genericity, I am also writing a series of posts about Advanced Generic Programming in Java, an activity well suited to library developers but which is different from simply programming with Java generics. To some extent, this series started in October 2014, when I began to help improve the architecture of jMetal, a project about metaheuristics (aka optimization algorithms, like hill climbing, genetic algorithms, and so on). Because I tend to write a lot of details (and because it is interesting, I guess), it was suggested that I write a book about it, which I found to be a good idea for sharing it more widely. I am still participating in this project, but now I think it is time to gather all that I said into a proper compilation that other people can reuse for their own projects. This series is still a set of drafts, so I am not giving access to it for now (although hackers may easily find it {^_°}), but I already have some material and I would like to publish some of it soon. I want to produce a reference on the topic, so feel free to send me an e-mail with any suggestion/feedback that you may have, whether it is about fixes, additions, disagreements, or anything else. Maybe a physical book will come out of it; that depends on how it turns out and the feedback I get.
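To give a taste of what I mean by advanced generic programming, as opposed to merely using generics, here is a minimal, hypothetical sketch (the class names are made up for this post, they come from no particular project) of a recursive type bound, a library-oriented pattern where a parent class returns the subclass type from its fluent methods, so users keep full typing without casts:

```java
// Recursive type bound: F is constrained to be a subclass of Fluent<F>,
// so fluent methods can return the concrete subclass type.
class Fluent<F extends Fluent<F>> {
    private final StringBuilder log = new StringBuilder();

    @SuppressWarnings("unchecked")
    F step(String name) {
        log.append(name).append(";");
        return (F) this; // safe as long as subclasses bind F to themselves
    }

    String log() {
        return log.toString();
    }
}

// The subclass binds F to itself, so step() returns Greeter, not Fluent.
class Greeter extends Fluent<Greeter> {
    String greet() {
        return "hello";
    }
}
```

With this, `new Greeter().step("init").step("run").greet()` compiles: each `step` call returns a `Greeter`, so the subclass method stays reachable after any number of inherited fluent calls.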


I mainly use Git to version my code and GitHub to store it remotely. As a perfectionist, I often spend some time cleaning not only my code, but also my Git commits. It helps me review my own code, and it also supports my colleagues in reviewing it. Here are some insights I can share on that matter:
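As a minimal illustration (the repository, file, and messages are made up), one commit-cleaning workflow consists in recording a fix as a `fixup!` commit, then letting an automated interactive rebase fold it into the commit it amends:

```shell
# Sketch of a commit-cleaning workflow: record a fix as a "fixup!" commit,
# then let an automated interactive rebase squash it into its target commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "v1" > feature.txt
git add feature.txt
git commit -qm "Add feature"

echo "v2" > feature.txt
git add feature.txt
git commit -q --fixup HEAD   # creates a "fixup! Add feature" commit

# Accept the generated rebase plan as-is instead of opening an editor.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root

git log --oneline            # history is now a single clean commit
```

The nice property of `--fixup` is that you can keep working and pile up fixes, then run a single `--autosquash` rebase before review to obtain a history where each commit is self-contained.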


Besides Java-related stuff, this blog is mainly a place where I centralize questions of interest to me, in a style that I think fits well with research (context-question-method-answer). Some have proven answers, others do not; some are already published elsewhere (I cite them), others are not. I will see later whether some structure would be helpful, but for now each blog entry is expected to focus on a single question and to link to other entries focusing on related questions, a bit like Wikipedia but with question-driven links:


I have also been interested in Artificial Intelligence for a long time, since high school more precisely, although I only had the opportunity to work seriously on it much later. Briefly, while I was doing research in a different field, I gathered information and participated in seminars related to A.I. in order to build my expertise on the topic. I was frustrated by the direction A.I. was taking, especially with the presentation of Watson from IBM, which is basically a bunch of tools put together with a great amount of human tuning to make it work. For me, A.I. was not about "making artificial beings in an intelligent way", but about "making artificial beings which are intelligent".

This frustration led me to focus on the definition of intelligence which, as experts know, is far from reaching broad agreement. This seems to be the reason why people go their own way without caring much about it. But this lack of definition, for someone as rigorous as me, is simply a no-go: I cannot work seriously on a topic without having a clear idea of the goal it is supposed to achieve. This feeling is reinforced daily by seeing how far the general public's comprehension of A.I. is from the actual tools made in the field. In order to fix that, I read about many things to try to figure out a definition that could bring everyone together. This research led to my publication Artificial Intelligence and Expertise: the Two Faces of the Same Artificial Performance Coin, which you can find on my papers page.

In summary, the point is that we know well what expertise is about, and research about intelligence is basically the same thing with one fundamental difference. Where expertise is all about specialization for high performance, intelligence is usually interpreted as a generic ability, a capacity to perform well in any domain. In order to have a reliable definition of intelligence, I took the definition of expertise and replaced its specialized aspect with a generic one. This led me to interpret intelligence as having or showing domain-generic skill or knowledge because of what you have been taught or what you have experienced.

With this basis, I could define an artificial intelligence as having or showing domain-generic processes or data which have been transferred to or generated by it. The main goal was then to understand well what genericity is about, since it is the main difference with expertise. Indeed, the overwhelming majority of what we produce in the A.I. field are artificial experts: highly specialized tools, each performing very well on a single task. My thinking about genericity in a formal way is detailed in one of my blog entries. But since I am more familiar with genericity in terms of programming, I decided to focus on generic programming, as mentioned in the previous sections. The rough idea is to interpret an artificial intelligence as a self-programming program, which applies generic programming techniques on itself to identify and reuse generic behaviours in order to perform in many fields.

Currently, I would design an artificial intelligence, or an artificial general intelligence (AGI) to use the current vocabulary, as follows:

The presence of the knowledge graph is a way to support explainability: it allows the use of graph mapping techniques to understand the knowledge graph based on its similarity with other, human-made graphs. Machine learning, which is all the current hype, appears to me merely as a good way to improve performance by driving (or replacing) familiar searches in the knowledge graph. There should be processes to complete, clean, and optimize the knowledge graph, whether on the fly or through dedicated sessions (hey, machines can sleep too!). These processes include, among other things, the capacity to factor similar subgraphs, abstract subgraphs (the core of genericity), fix inconsistencies, and so on. I have a lot of resources on all of that, and I would like to gradually share them here. I still have a lot to work on, but if you know relevant works to consider or people to work with, feel free to suggest them.
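To make the "factor similar subgraphs" idea more concrete, here is a minimal, hypothetical sketch (the class and method names are mine, they come from no existing library): a knowledge graph stored as subject-predicate-object triples, with a naive factoring step that groups subjects sharing the exact same outgoing pattern, i.e. candidates for a shared abstraction.

```java
import java.util.*;

// A knowledge graph as a flat list of (subject, predicate, object) triples.
class KnowledgeGraph {
    record Triple(String subject, String predicate, String object) {}

    private final List<Triple> triples = new ArrayList<>();

    void add(String subject, String predicate, String object) {
        triples.add(new Triple(subject, predicate, object));
    }

    // Group subjects by their set of outgoing (predicate, object) pairs:
    // subjects in the same group carry identical local structure, so they
    // are candidates to share a single abstracted node.
    Map<Set<String>, Set<String>> factorSimilarSubjects() {
        Map<String, Set<String>> patternBySubject = new HashMap<>();
        for (Triple t : triples) {
            patternBySubject
                .computeIfAbsent(t.subject(), k -> new TreeSet<>())
                .add(t.predicate() + "=" + t.object());
        }
        Map<Set<String>, Set<String>> subjectsByPattern = new HashMap<>();
        patternBySubject.forEach((subject, pattern) ->
            subjectsByPattern
                .computeIfAbsent(pattern, k -> new TreeSet<>())
                .add(subject));
        return subjectsByPattern;
    }
}
```

For example, with the triples (cat, is, animal) and (dog, is, animal), the subjects cat and dog end up in the same group, which suggests abstracting them under a common node. A realistic process would of course need approximate matching rather than exact pattern equality, but the exact case already shows where the abstraction step plugs in.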