08 December 2023

The stakes of computing: how technological advances have revolutionized our world

Today, computing is omnipresent in our daily lives, revolutionizing many aspects of how we live. In this article, we will explore the stakes of computing and how technological advances have profoundly transformed our world.

An unprecedented digital revolution

The advent of computing marked the start of an unprecedented digital revolution. Computers, smartphones and other technological devices have progressively taken over our daily lives, easing our tasks, connecting us to others and giving us access to a vast amount of information.

Today, it is hard to imagine a world without the Internet and without the digital services that simplify our lives. Computing has changed the way we communicate, the way we work and the way we access information. We constantly need to stay connected and to use technology efficiently.

The benefits of computing

Computing has brought many benefits to our society. It has made searching for information easier by putting powerful search engines at our disposal. It has also revolutionized the business world by enabling the automation of tasks and encouraging the digitization of processes.

Moreover, computing has shrunk distances by allowing us to communicate instantly with people all over the world through social networks and messaging applications. It has also made online commerce possible, offering a new kind of freedom in our consumption habits.

Challenges and recommendations

However, computing has also brought its share of challenges. The protection of personal data has become a crucial issue. Cyberattacks are constantly on the rise and threaten our privacy as well as our online security.

To face these challenges, it is essential to take adequate security measures. We recommend protecting your sensitive information by using strong passwords, regularly updating your software and avoiding downloads from dubious sources.

Furthermore, it is important to keep up with the latest technological advances and to learn how to use them. Many training centers offer computing courses for all levels, so that you can get the most out of these technologies.

Conclusion

Computing has definitively revolutionized our world. It has improved our daily lives, eased our exchanges and opened up a multitude of possibilities. However, it is important to remain vigilant and to keep learning in order to face the challenges of this technological revolution.

The post "The stakes of computing: how technological advances have revolutionized our world" appeared first on gnomelibre.fr.

07 December 2023

The impact of computing on our daily lives: realities and prospects

Nowadays, computing is everywhere and has profoundly shaped our daily lives. Whether in our professional, personal or social lives, this discipline has revolutionized the way we live and opened up many prospects. In this article, we will explore the various realities of computing's impact on our daily lives and consider the prospects it offers.

A digital revolution

Computing has revolutionized many aspects of our everyday lives. Thanks to computers, smartphones and tablets, we can now access a multitude of online services, information and entertainment. Whether for working, communicating with loved ones, shopping or simply having fun, computing has made it all possible.

Advantages, but also challenges

This digital revolution has brought many advantages, but it also presents challenges. On one hand, it has eased access to information, sped up processes and improved productivity. On the other, it raises questions about data protection, online security and dependence on technology.

For example, the algorithms used in search engines can make finding information easier, but they also raise questions about the neutrality of results and the manipulation of information. Likewise, social networks offer an unprecedented communication platform, but they also generate problems around privacy and disinformation.

Concrete applications

Computing's impact on our daily lives shows up concretely in many fields. In healthcare, for example, computing improves patient follow-up, facilitates diagnosis and helps develop new therapies. In education, it provides access to online teaching resources and fosters distance learning. In mobility, it has given rise to autonomous cars and on-demand transport services.

Future prospects

The prospects offered by computing are promising. Artificial intelligence, for instance, opens up possibilities that were unimaginable a few years ago. It paves the way for new innovations in many sectors, such as medicine, industry, transport and many others.

Developments in virtual and augmented reality also promise to transform the way we interact with our environment. These technologies offer new possibilities for communication, learning and entertainment.

Some recommendations

Given these realities and prospects, a few things are essential to integrate computing well into our daily lives. It is important to stay vigilant about how our personal data is used and to strengthen our online security. In parallel, training in digital tools and technologies is recommended in order to take full advantage of their benefits.

On our website, we are committed to providing you with relevant, up-to-date information on computing's impact on our daily lives. We invite you to read our articles and to send us your questions and comments.

In conclusion, computing has undeniably transformed our daily lives and continues to shake them up. Its current realities and future prospects offer many opportunities, but they also call for reflection on our practices and adaptation to this digital revolution.

The post "The impact of computing on our daily lives: realities and prospects" appeared first on gnomelibre.fr.

20 October 2023

Antenna switch, jack version

The previous episode was more than two years ago (Switch antenne), so I'll start with a quick recap: at the radio station we have two studios and a device that decides which studio's signal goes on air: studio 1, studio 2, or neither, in which case automated playout takes over. It's handled by an Arduino, with a bit of electronics for two distinct parts: the buttons and LEDs used to make and display the selection, and a part "routing" the chosen audio signal. The adventure two years ago was about the buttons: back then I had set up a web interface to allow the selection, and we had then managed to get the buttons working again. This time, this week, we had problems with the other part, the one handling the audio signal.

It starts on Sunday, but since we communicate through a paper notebook it goes unnoticed, and on Monday morning I only discover those messages after doing two hours of the morning show into the void (thankfully we have listeners to give us a sign).

[Photo: daybreak, the sun reddening the horizon]

16 October 2023, the day rises while we're off the air
(I would have done better to check what was going on air, rather than take this photo)

Holidays are handy: I can stay at the station to try to understand the problem, but without much success. In the end, re-uploading the program to the Arduino put things back in order, which I can't explain. Tuesday morning, it plays a slightly different trick: the studio does go on air, but five minutes after the switch. Wednesday morning, something else again: it doesn't go through, but trying again an hour later works.

All of this suggests that one day it will stop working entirely and we'll be hard pressed to get it going again, so I pick up the work from two years ago and tackle the handling of the audio signal.

The plan is simple: we could plug a second sound card into the playout computer, with 4 channels to connect the 2 studios; on a selection change we could then use jack to make and break connections.

To move fast: the previous episode involved creating websockets to display the selection on a web page, and that can be reused to get notified of changes. It makes a rather baroque cascade, physical buttons → Arduino → UDP packet → "proxy" → websocket, but why not.

As for the code, step 1 is handling the websocket notifications, which is very simple with the aiohttp module,

async with aiohttp.ClientSession() as session:
    async with session.ws_connect(app_settings.SWITCH_WS_URL) as ws:
        async for msg in ws:
            if msg.type == aiohttp.WSMsgType.TEXT:
                try:
                    msg = json.loads(msg.data)
                except ValueError:
                    continue
                if msg.get('active') != currently_active:
                    currently_active = msg.get('active')
                    self.update_jack_connections(currently_active)

For the jack part, in a first version I use the jack_connect and jack_disconnect commands; much simplified:

def update_jack_connections(self, active):
    dports = app_settings.SWITCH_OUT_PORTS
    # ex: ('alsa_out:playback_1', 'alsa_out:playback_2')

    for port_id, port_names in app_settings.SWITCH_IN_PORTS.items():
        # ex: {
        #  0: ('netjack_soma:capture_1', 'netjack_soma:capture_2'),
        #  1: ('alsa_in:capture_1', 'alsa_in:capture_2'),
        #  2: ('alsa_in:capture_3', 'alsa_in:capture_4'),
        # }

        if port_id == active:
            cmd = 'jack_connect'
        else:
            cmd = 'jack_disconnect'
        subprocess.run([cmd, port_names[0], dports[0]])
        subprocess.run([cmd, port_names[1], dports[1]])

But these commands are no longer available with jack: they were moved to a jack-example-tools module, which isn't available in Debian. I've actually kept the old version of jack because I find these tools very handy, but here I still decide not to depend on them, so I rewrite it, at somewhat greater length:

def update_jack_connections(self, active):
    dports = app_settings.SWITCH_OUT_PORTS
    with jack.Client('switch-jack') as client:
        known_ports = {x.name for x in client.get_ports(is_audio=True)}  # presumably used by the real version's error handling
        for port_id, port_names in app_settings.SWITCH_IN_PORTS.items():
            if port_id == active:
                self.jack_connect(client, port_names[0], dports[0])
                self.jack_connect(client, port_names[1], dports[1])
            else:
                self.jack_disconnect(client, port_names[0], dports[0])
                self.jack_disconnect(client, port_names[1], dports[1])

def jack_connect(self, client, in_port, out_port):
    connections = [x.name for x in client.get_all_connections(in_port)]
    if out_port not in connections:
        client.connect(in_port, out_port)

def jack_disconnect(self, client, in_port, out_port):
    connections = [x.name for x in client.get_all_connections(in_port)]
    if out_port in connections:
        client.disconnect(in_port, out_port)

(the real version has logging and error handling on top).

In between these coding episodes, we take a sound card off the shelf, plug the studios into it, plug the sound card into the computer, and bridge it to the jack instance already running on the existing sound card, with alsa_in -d hw:CARD=US4x4,DEV=0 -c 4 and alsa_out -d hw:CARD=US4x4,DEV=0 -c 4. (alsa_in and alsa_out are now also part of the jack-example-tools module.)

We don't wire the sound card's output to the actual transmitter; we decide to keep this as a test for now. (Especially since the sound card we're using has already played tricks on us, losing the signal after a few days; we'll probably go with another model.)

It runs, and with the logs I can check this morning that it worked:

2023-10-19 20:47:45,006 (I) setting source: 1
2023-10-19 20:47:45,008 (I) disconnecting netjack_soma:capture_1 and alsa_out:playback_1
2023-10-19 20:47:45,009 (I) disconnecting netjack_soma:capture_2 and alsa_out:playback_2
2023-10-19 20:47:45,009 (I) connecting alsa_in:capture_1 and alsa_out:playback_1
2023-10-19 20:47:45,010 (I) connecting alsa_in:capture_2 and alsa_out:playback_2
2023-10-19 22:43:01,594 (I) setting source: 0
2023-10-19 22:43:01,596 (I) connecting netjack_soma:capture_1 and alsa_out:playback_1
2023-10-19 22:43:01,596 (I) connecting netjack_soma:capture_2 and alsa_out:playback_2
2023-10-19 22:43:01,597 (I) disconnecting alsa_in:capture_1 and alsa_out:playback_1
2023-10-19 22:43:01,597 (I) disconnecting alsa_in:capture_2 and alsa_out:playback_2
2023-10-20 06:59:19,629 (I) setting source: 2
2023-10-20 06:59:19,652 (I) disconnecting netjack_soma:capture_1 and alsa_out:playback_1
2023-10-20 06:59:19,653 (I) disconnecting netjack_soma:capture_2 and alsa_out:playback_2
2023-10-20 06:59:19,653 (I) connecting alsa_in:capture_3 and alsa_out:playback_1
2023-10-20 06:59:19,654 (I) connecting alsa_in:capture_4 and alsa_out:playback_2
2023-10-20 09:02:35,139 (I) setting source: 0
2023-10-20 09:02:35,141 (I) connecting netjack_soma:capture_1 and alsa_out:playback_1
2023-10-20 09:02:35,142 (I) connecting netjack_soma:capture_2 and alsa_out:playback_2
2023-10-20 09:02:35,143 (I) disconnecting alsa_in:capture_3 and alsa_out:playback_1
2023-10-20 09:02:35,143 (I) disconnecting alsa_in:capture_4 and alsa_out:playback_2

Some pieces are still missing: in practice, when this starts up the websocket won't be available yet, and if there is ever an interruption there is nothing to resume; I add those parts this morning, and here I am at last with something that seems able to hold up. (Code in the repository.) And this makes a device that is independent of the Arduino, and that will be easier to set up at other radio stations.
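
The reconnection part could look like the sketch below: retry with a capped exponential backoff, resetting the delay once a connection succeeds. Here `connect` stands in for the session.ws_connect block from step 1, and the real code would also catch aiohttp.ClientError; this is an illustration, not the repository code,

```python
import asyncio

def next_delay(delay, maximum=60):
    # Capped exponential backoff: 5, 10, 20, 40, 60, 60, ...
    return min(delay * 2, maximum)

async def watch_forever(connect, initial_delay=5):
    # Keep (re)connecting; `connect` stands in for the websocket loop,
    # returning when the connection closes, raising OSError on failure
    # (aiohttp's connection errors derive from OSError).
    delay = initial_delay
    while True:
        try:
            await connect()
            delay = initial_delay  # connected once: reset the backoff
        except OSError:
            await asyncio.sleep(delay)
            delay = next_delay(delay)
```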

 

05 October 2023

The 8-hour day and work email

"We fought to win it, we will fight to keep it", like a protest slogan taken up as a chorus by the Vulves assassines, but here about the working day, and more than a century ago.

I've always been bad at this: my personal and work email arrive in the same place, even though experience shows the only thing that really works to disconnect me is to completely cut off fetching mail from the work server. Over time I had gone from running fetchmail directly to controlling it through a script, and even though that was mostly to get a status display, the structure made it fairly easy to drop a server; at the start of a holiday it was enough to comment out the work line,

tasks = (
#    asyncio.create_task(fetchmail(server='mail.boulot')),
    asyncio.create_task(fetchmail(server='mx.0d.be')),
)
await asyncio.gather(*tasks, return_exceptions=False)

But I didn't have the discipline to do that on weekends and other days off, let alone every evening and morning. It was time to do something about it and give myself working hours which, if they aren't yet the 8-hour day, still move well away from permanent availability. Hours, then, in a very basic way: days with a start time and an end time.

servers = {
    'mx.0d.be': {
    },
    'mail.boulot': {
        'schedule': {
            0: [(8, 0), (20, 0)],  # Monday, from 8 to 20
            1: [(8, 0), (20, 0)],
            2: [],  # Wednesday, full off
            3: [],
            4: [(8, 0), (20, 0)],
            5: [],
            6: []
        }
    }
}

And the utterly unsubtle code to interpret that,

def should_run(server):
    schedule = servers.get(server).get('schedule')
    if not schedule:
        return True  # always
    now = datetime.datetime.now()
    day_schedule = schedule[now.weekday()]
    if not day_schedule:
        return False
    if (now.hour, now.minute) < day_schedule[0]:
        return False
    if (now.hour, now.minute) >= day_schedule[1]:
        return False
    return True
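
The schedule logic is easy to exercise once the current time is a parameter rather than read inside the function. A sketch along those lines (should_run_at and servers_to_fetch are my names, not the script's; the servers configuration is repeated so the snippet stands alone), showing how server selection could feed the tasks tuple from earlier:

```python
import datetime

servers = {
    'mx.0d.be': {},
    'mail.boulot': {
        'schedule': {
            0: [(8, 0), (20, 0)],  # Monday, from 8 to 20
            1: [(8, 0), (20, 0)],
            2: [],                 # Wednesday, fully off
            3: [],
            4: [(8, 0), (20, 0)],
            5: [],
            6: [],
        }
    },
}

def should_run_at(server, now):
    # Same logic as should_run, with the time passed in instead of
    # calling datetime.datetime.now(), which makes it checkable
    schedule = servers.get(server, {}).get('schedule')
    if not schedule:
        return True  # no schedule: always fetch
    day_schedule = schedule[now.weekday()]
    if not day_schedule:
        return False
    return day_schedule[0] <= (now.hour, now.minute) < day_schedule[1]

def servers_to_fetch(now):
    # The servers worth creating a fetchmail task for at this moment
    return [name for name in servers if should_run_at(name, now)]
```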

It then remains to check the time regularly, but there's no need for much finesse here either: I just add a task that every ten minutes triggers a re-evaluation of the schedule and a restart of the fetchmail processes,

async def period():
    await asyncio.sleep(600)  # 10 minutes
    raise Restart()
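
The post doesn't show how the Restart exception is consumed; presumably something like the supervision loop below, which cancels the fetchmail tasks and relaunches them whenever the periodic task fires. A sketch under that assumption (supervise and the max_restarts escape hatch are mine, added so the loop can terminate; the real script runs forever):

```python
import asyncio

class Restart(Exception):
    pass

async def periodic(delay=600):
    # Force a restart every `delay` seconds, re-evaluating the schedule
    await asyncio.sleep(delay)
    raise Restart()

async def supervise(run_fetch, delay=600, max_restarts=None):
    # Run the fetch coroutine; when periodic() raises, tear it down
    # and launch it again on the next loop iteration
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        fetch_task = asyncio.ensure_future(run_fetch())
        try:
            await asyncio.gather(fetch_task, periodic(delay))
        except Restart:
            fetch_task.cancel()
            restarts += 1
```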

In one of the many hours thus freed up, there would be the rest of the script to clean, to pull out pieces that belong in a configuration file instead, but since that isn't done, for once I have no link to a repository with the code.

14 August 2023

New responsibilities

As part of the same process outlined in Matthias Clasen's "LibreOffice packages" email, my management chain has made the decision to stop all upstream and downstream work on desktop Bluetooth, multimedia applications (namely totem, rhythmbox and sound-juicer) and libfprint/fprintd. The rest of my upstream and downstream work will be reassigned depending on Red Hat's own priorities (see below), as I am transferred to another team that deals with one of a list of Red Hat’s priority projects.

I'm very disappointed, because those particular projects were already starved for resources: I spent less than 10% of my work time on them in the past year, with other projects and responsibilities taking most of my time.

This means that, in the medium-term at least, all those GNOME projects will go without a maintainer, reviewer, or triager:
- gnome-bluetooth (including Settings panel and gnome-shell integration)
- totem, totem-pl-parser, gom
- libgnome-volume-control
- libgudev
- geocode-glib
- gvfs AFC backend

Those freedesktop projects will be archived until further notice:
- power-profiles-daemon
- switcheroo-control
- iio-sensor-proxy
- low-memory-monitor

I will not be available for reviewing libfprint/fprintd, upower, grilo/grilo-plugins, gnome-desktop thumbnailer sandboxing patches, or any work related to XDG specifications.

Kernel work, reviews and maintenance, including recent work on SteelSeries headset and Logitech devices kernel drivers, USB revoke for Flatpak Portal support, or core USB is suspended until further notice.

All my Fedora packages were orphaned about a month and a half ago; some are likely still orphaned, if there are takers. RHEL packages were unassigned about 3 weeks ago; they've since been reassigned, so I cannot point to the new maintainer(s).

If you are a partner, or a customer, I would recommend that you get in touch with your Red Hat contacts to figure out what the plan is going forward for the projects you might be involved with.

If you are a colleague that will take on all or part of the 90% of the work that's not being stopped, or a community member that was relying on my work to further advance your own projects, get in touch, I'll do my best to accommodate your queries, time permitting.

I'll try to make sure to update this post, or create a new one if and when any of the above changes.

17 August 2022

Speeding up the kernel testing loop

When I create kernel contributions, I usually rely on specific hardware, which makes using a system on which I need to deploy kernels too complicated or time-consuming to be worth it. Yes, I'm an idiot that hacks the kernel directly on their main machine, though in my defense, I usually just need to compile drivers rather than full kernels.

But sometimes I work on a part of the kernel that can't be easily swapped out, like the USB sub-system. In which case I need to test out full kernels.

I usually prefer compiling full kernels as RPMs on my Fedora system, as it makes it easier to remove old test versions and to clearly record more information in the changelog or version numbers if I need to.

Step one, build as non-root

First, if you haven't already done so, create an ~/.rpmmacros file (I know...), and add a few lines so you don't need to be root, or write stuff in /usr to create RPMs.

$ cat ~/.rpmmacros
%_topdir        /home/hadess/Projects/packages
%_tmppath        %{_topdir}/tmp

Easy enough. Now we can use fedpkg or rpmbuild to create RPMs. Don't forget to run those under “powerprofilesctl launch” to speed things up a bit.

Step two, build less

We're hacking the kernel, so let's try and build from upstream. Instead of the aforementioned fedpkg, we'll use “make binrpm-pkg” in the upstream kernel, which builds the kernel locally, as it normally would, and then packages just the binaries into an RPM. This means that you can't really redistribute the results of this command, but it's fine for our use.

If you choose to build a source RPM using “make rpm-pkg”, know that this one builds the kernel inside rpmbuild; this will be important later.

Now that we're building from the kernel sources, it's time to activate the cheat code. Run “make localmodconfig”. It will generate a .config file containing just the currently loaded modules. Don't forget to modify it to include your new driver, or the driver for a device you'll need for testing.

Step three, build faster

If running “make rpm-pkg” is the same as running “make ; make modules” and then packaging up the results, does that mean that the “%{?_smp_mflags}” RPM macro is ignored, I make you ask rhetorically. The answer is yes. “make -j16 rpm-pkg”. Boom. Faster.

Step four, build fasterer

As we're building in the kernel tree locally before creating a binary package, already compiled modules and binaries are kept, and shouldn't need to be recompiled. This last trick can however be used to speed up compilation significantly if you use multiple kernel trees, or need to clean the build tree for whatever reason. In my tests, it made things slightly slower for a single tree compilation.

$ sudo dnf install -y ccache
$ make CC="ccache gcc" -j16 binrpm-pkg

Easy.

And if you want to speed up the rpm-pkg build:

$ cat ~/.rpmmacros
[...]
%__cc            ccache gcc
%__cxx            ccache g++

More information is available in Speeding Up Linux Kernel Builds With Ccache.

Step five, package faster

Now, if you've implemented all this, you'll see that the compilation still stops for a significant amount of time just before writing “Wrote kernel...rpm”. A quick look at top will show a single CPU core pegged to 100% CPU. It's rpmbuild compressing the package that you will just install and forget about.

$ cat ~/.rpmmacros
[...]
%_binary_payload    w2T16.xzdio

More information is available in Accelerating Ceph RPM Packaging: Using Multithreaded Compression.

TL;DR and further work

All those changes sped up the kernel compilation part of my development from around 20 minutes to less than 2 minutes on my desktop machine.

$ cat ~/.rpmmacros
%_topdir        /home/hadess/Projects/packages
%_tmppath        %{_topdir}/tmp
%__cc            ccache gcc
%__cxx            ccache g++
%_binary_payload    w2T16.xzdio


$ powerprofilesctl launch make CC="ccache gcc" -j16 binrpm-pkg

I believe there are still significant speed-ups that could be made, in the kernel, by parallelising some of the symbols manipulation, caching the BTF parsing for modules, swapping out the single-threaded vmlinux bzip2 compression, and not generating a headers RPM (note: tested this last one, saves 4 seconds :)

 

The results of my tests. YMMV, etc.

Command                                                          Time spent    Notes
koji build --scratch --arch-override=x86_64 f36 kernel.src.rpm   129 minutes   It's usually quicker, but that day must have been particularly busy
fedpkg local                                                     70 minutes    No rpmmacros changes except setting the workdir in $HOME
powerprofilesctl launch fedpkg local                             25 minutes
localmodconfig / binrpm-pkg                                      19 minutes    Defaults to "-j2"
localmodconfig -j16 / binrpm-pkg                                 1:48 minutes
powerprofilesctl launch localmodconfig ccache -j16 / binrpm-pkg  7 minutes     Cold cache
powerprofilesctl launch localmodconfig ccache -j16 / binrpm-pkg  1:45 minutes  Hot cache
powerprofilesctl launch localmodconfig xzdio -j16 / binrpm-pkg   1:20 minutes

01 February 2020

« Gagner la guerre » by Jean-Philippe Jaworski

As a big fantasy fan, most of my reading is usually in English. Jean-Philippe Jaworski's "Gagner la guerre" had been sitting in my (virtual) to-read pile for a while, haloed by its excellent reviews and the prospect of reading a fantasy work in French. It sat there alongside "Janua Vera", the collection of short stories set in the same universe that precedes the novel.

Jaworski's reputation as an excellent author is well deserved; his text is incredibly well written. The style and the pacing are remarkable, and I caught myself several times rereading certain passages just for the pleasure of enjoying them a second time. The vocabulary is also extremely rich; quite simply, I had to use my e-reader's dictionary function more often than during my readings in English!

The universe, so well introduced in "Janua Vera", is as coherent and pleasant to discover as ever. This blend of pseudo-historical realism, with the main city a skillful mix of Florence and ancient Rome, sprinkled with magic and fantasy, works wonderfully.

All these fine qualities are unfortunately tarnished by an extremely macho tone and nearly nonexistent female characters. We fall right back into the criticisms and clichés often associated with the genre, and that's a real shame. It struck me all the more after the many works by Brandon Sanderson and Robin Hobb I have read recently, which brilliantly demonstrated that one can write excellent fantasy with strong, interesting female characters. I'm left with a bit of the same aftertaste as after reading "La Horde du Contrevent", which fell into the same traps, though less markedly. It gives the impression that French fantasy has stayed stuck in the last century and cannot break free of the gender stereotypes that have clung to this literary genre for too long.

So if you have recommendations of French-speaking authors who manage to avoid these pitfalls, I'm all ears.

30 January 2020

Dealing with Loss

Warning: This blog post contains a lot of talk about feelings, loss, and discussion of a suicide.

Recently, I have been thinking a lot about loss. My nephew died just a few months ago, after a short life with Duchennes Muscular Dystrophy. A neighbour recently took her own life, leaving a husband and two children behind. And today I learned that someone I have known for 15 years in the open source world recently passed away through a mailing list post. In each case, I have struggled with how to grieve.

My nephew had been ill for a long time, and we had been open in my family about taking advantage of opportunities to spend time with him over the past few years, because we knew he would not live much longer. And yet, death is always a surprise, and when we got a phone call one Saturday in November to let us know that he had passed away in his sleep, my first instincts were logistical. “I have a work trip coming up – when will the funeral service happen? Can I travel to Asia and get home in time, or do I need to cancel my trip? What is the cheapest way to get home? Who should travel with me?” When I got home, the funeral was a multi-day collective grieving, with neighbours, cousins, uncles and aunts arriving to pay their respects, express their condolences, and spend time with the family. It was not until we were shovelling dirt on top of the casket that I really thought about the finality of the burial – I will never see my nephew again.

And yet, I was not overwhelmed with grief. I had never really known him intimately. How well do you know a child 25 years your junior, after you leave home and live abroad? How close a connection do any thirty-somethings have with their teenage nieces and nephews? I second-guessed my emotions. Should I feel sadder? Is there an appropriate way to grieve? In the end, I decided to allow myself to feel the feelings I felt, and not to try to figure out whether I “should” be feeling differently. But avoiding self-judgement was difficult.

Last week, when we got the news about our neighbour, it hit me pretty hard. We knew the family well, had been to barbecues and play-off games in their house. I had coached basketball with her husband, one of their sons was in the team. Initially, we read that she had “passed away suddenly”, it was only through school bus stop gossip that we learned that she had committed suicide. We learned that she had been suffering from depression, that her life had not been easy for the past few months. I felt a great sadness, and also a little guilt. We had enjoyed her company in the past, but I knew nothing of her life. I was about to leave on a work trip, I would miss her memorial service and funeral. I was told that the ceremonies were very emotional, and really felt like the community coming together. The priest leading the service spoke openly about suicide and depression, and my wife said that his ceremony gave her a great sense of peace, removing the veil from some of the awkwardness that she felt around the topic. It gave the community an opportunity to start healing.

But I was not there. Now, I have all of these other thoughts about the appropriate way for me to grieve again. My instinct is to call to their house to express my condolences, but I am afraid to. This time, I find myself comparing my feelings to those of her family. I imagine how they must be feeling. Surely they are devastated, probably angry, maybe even feeling guilty. I think about her sons, the same age as two of my own sons, and I wonder what their lives will be like now. What right do I have to feel grief, or to impose on their grieving to express my feelings to them? How would I react, in the same circumstances, if this acquaintance called to the house a week after a funeral ceremony? And then, I also feel guilt. Sure, we didn’t know each other that well, but could I have been there for her in some way? Was there some way that we could have helped? I think about how alone she must have felt.

And now, today, I have learned of the death of someone I would have called a friend. Someone I would regularly meet at conferences, who I got along very well with professionally and personally, two or three times a year. I was not a part of his life, nor he a part of mine. I’ve found myself tearing up this morning thinking about our interactions, realizing that we will never meet again. And once more, I struggle to find the appropriate way to grieve.

I don’t know why I felt compelled to write this – I have debated saving it as a draft, deleting it, writing it in a private text file. But I am sharing it. I think I feel like I missed a part of my education in dealing with loss. I feel like many people missed that part of our education. Maybe by sharing, other people can share their feelings in comments and help me further my own education. Maybe by reading, others who struggle with dealing with loss will realise they’re not alone. Maybe it will achieve nothing more than helping me deal with my own feelings by verbalizing them. Let’s find out…

10 January 2020

Rust/GStreamer paid internship at Collabora

Collabora is offering various paid internship positions for 2020. We have a nice range of very cool projects involving kernel work, Panfrost, Monado, etc.

I'll be mentoring a GStreamer project aiming to write a Chromecast sink element in Rust. It would be a great addition to GStreamer and would give the student a chance to learn not only about our favorite multimedia framework but also about bindings between C GObject code and Rust.

So if you're interested, don't hesitate to apply, or contact me if you have any questions.

08 August 2019

Ubuntu 18.04.3 LTS is out, including GNOME stable updates and Livepatch desktop integration

Ubuntu 18.04.3 LTS has just been released. As usual with LTS point releases, the main changes are a refreshed hardware enablement stack (newer versions of the kernel, xorg & drivers) and a number of bug and security fixes.

For the Desktop, newer stable versions of GNOME components have been included as well as a new feature: Livepatch desktop integration.

For those who aren’t familiar, Livepatch is a service which applies critical kernel patches without rebooting. The service is available as part of an Ubuntu Advantage subscription, but is also made available for free to Ubuntu users (up to 3 machines). Fixes are downloaded and applied to your machine automatically to help reduce downtime and keep your Ubuntu LTS systems secure and compliant. Livepatch is available for your servers and your desktops.

Andrea Azzarone worked on desktop integration for the service and his work finally landed in the 18.04 LTS.

To enable Livepatch you just need an Ubuntu One account. The setup is part of the first login, or can be done later from the corresponding software-properties tab.

Here is a simple walkthrough showing the steps and the result:

The wizard displayed during the first login includes a Livepatch step that will help you get signed in to Ubuntu One and enable Livepatch:

Clicking the ‘Set Up’ button invites you to enter your Ubuntu One information (or to create an account), and that’s all that is needed.

The new desktop integration includes an indicator showing the current status and notifications telling when fixes have been applied.

You can also get more details on the corresponding CVEs from the Livepatch configuration UI.

You can always hide the indicator using the toggle if you prefer to keep your top panel clean and simple.

Enjoy the increased security in between reboots!


08 July 2019

Bolt 0.8 update

Christian recently released bolt 0.8, which includes IOMMU support. The Ubuntu security team seemed eager to see that new feature available so I took some time this week to do the update.

The new version also features a new bolt-mock utility and installed tests. I used the opportunity of updating the package to add an autopkgtest based on the new bolt-tests binary; hopefully that will help us make sure our TB3 support stays solid in the future 😉

The update is available in Debian Experimental and Ubuntu Eoan, enjoy!

23 March 2018

The Great Gatsby and onboarding new contributors

I am re-reading “The Great Gatsby” – my high-school son is studying it in English, and I would like to be able to discuss it with him with the book fresh in my mind – and I noticed this passage in the first chapter which really resonated with me.

…I went out to the country alone. I had a dog — at least I had him for a few days until he ran away — and an old Dodge and a Finnish woman, who made my bed and cooked breakfast and muttered Finnish wisdom to herself over the electric stove.

It was lonely for a day or so until one morning some man, more recently arrived than I, stopped me on the road.

“How do you get to West Egg village?” he asked helplessly.

I told him. And as I walked on I was lonely no longer. I was a guide, a pathfinder, an original settler. He had casually conferred on me the freedom of the neighborhood.

In particular, I think this is exactly how people feel the first time they answer a question in an open source community. A switch is flipped, a Rubicon is crossed. They are no longer new, and now they are in a space which belongs, at least in part, to them.

13 June 2017

Synology PhotoStation password vulnerability

On Synology NAS devices, the synophoto_dsm_user executable, part of the PhotoStation package, was leaking NAS user passwords on the command line.

Using a simple shell loop to run "ps ax | grep synophoto_dsm_user", it was possible to get the username and password credentials of any user on the NAS who had PhotoStation enabled with their DSM credentials.
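The snooping loop is trivial; here is a minimal Python sketch of it (the binary name is the real one from the report, everything else is illustrative):

```python
import subprocess
import time

def grep_lines(text, pattern):
    """Return the lines of 'text' that contain 'pattern'."""
    return [line for line in text.splitlines() if pattern in line]

def snapshot(pattern="synophoto_dsm_user"):
    """One pass over the process list, i.e. 'ps ax | grep <pattern>'."""
    out = subprocess.run(["ps", "ax"], capture_output=True, text=True).stdout
    return grep_lines(out, pattern)

def watch(pattern="synophoto_dsm_user", interval=0.1):
    """Poll until the short-lived process is caught with its arguments
    (credentials included) visible in the process list."""
    seen = set()
    while True:
        for line in snapshot(pattern):
            if line not in seen:
                seen.add(line)
                print(line)  # full command line, leaked password included
        time.sleep(interval)
```

Any unprivileged shell user can run something like this, which is why secrets belong on stdin, in a file, or in the environment, never in argv.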

Fortunately, by default, shell access on the NAS is not available (via SSH or telnet); it has to be enabled by the admin.

Still, it is bad practice to pass credentials to a process on the command line, where they can be intercepted.

PhotoStation version 6.7.1-3419 or earlier is vulnerable. I've contacted Synology and they should release a security fix really shortly, as well as a CVE for it.

Update (June 13, 2017): Synology has released a CVE and the vulnerability is fixed in PhotoStation 6.7.2-3429 or later. Remember to update this package on your NAS!

27 February 2017

Hackweek project: Let's Encrypt DNS-01 validation for acme.sh with Gandi LiveDNS

Last week was SUSE Hackweek and one of my projects was to get Let's Encrypt configured and working on my NAS.

Let's Encrypt is a project aimed at providing SSL certificates for free, in an automated way.

I wanted to get an SSL certificate for my Synology NAS. Synology now natively supports Let's Encrypt, but only if the NAS accepts incoming HTTP / HTTPS connections (which is not always what you want).

Fortunately, the protocol used by Let's Encrypt to validate a hostname (and generate a certificate), the Automatic Certificate Management Environment (ACME), has an alternative validation path, DNS-01, based on DNS.

DNS-01 requires access to your DNS server, so you can publish a validation token that the Let's Encrypt servers check to ensure you own the domain name you are requesting a certificate for.

There are a lot of ACME implementations, but very few support DNS-01 validation with my DNS provider (gandi.net).

I ended up using acme.sh, written entirely in shell script, and tried to plug Gandi DNS support into it.

After some tests, I discovered that Gandi's current DNS service does not allow fast-changing DNS zone information (which is pretty much a requirement for DNS-01 validation). Fortunately, Gandi now provides a new LiveDNS service, available in beta, with a RESTful HTTP API.

I was able to get it working quite rapidly with curl, and once the prototype was working, I cleaned everything up and created a pull request to integrate the support into acme.sh.

Now, my NAS has its own Let's Encrypt certificate and will update it every 90 days automatically. Getting and installing a certificate for another server (running openSUSE Leap) only took me 5 minutes.

This was a pretty productive hackweek!

25 May 2016

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests I finally managed to attend this time. It was held in Thessaloniki, Greece’s second largest city. The city is located by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu, Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turns out I first needed to delay all serialized events to the output thread to get predictable outcomes and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcome!

GStreamer Hackfest Venue

25 May 2015

SUSE Ruling the Stack in Vancouver

Rule the Stack

Last week during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses which made it easier for people to participate. And indeed, there were quite a number of participants with the schedule for booking slots being nearly full. While deploying Kilo was a goal, you could go with older releases getting a 10 minutes penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). In a similar way, the organizers wanted to see some upgrade and encouraged that with a bonus that could significantly impact the results (-40 minutes) — nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk once again built an appliance that has everything pre-installed and that configures the software on boot. This is actually not too difficult thanks to the amazing Kiwi tool and all the knowledge we have accumulated through the years at SUSE about building appliances, and also the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest and also to make sure that our Kilo packages (which were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds (yes, the text in the picture is wrong) after the bonuses. Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the approach Dirk picked, so from the start we targeted second position. There was not much to do for this approach, since it was similar to what we did in Paris, and our SUSE OpenStack Cloud Admin appliance had recently been updated. Our first attempt failed miserably due to a nasty bug (which was actually caused by a unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went smoother and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10 minutes penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just poor preparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud


These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest


Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

Update: See also the summary of the contest from the organizers.

12 May 2015

Deploying Docker for OpenStack with Crowbar

A couple of months ago, I met some colleagues of mine who work on Docker, and we discussed how much effort it would be to add support for it to SUSE OpenStack Cloud. It's something that had been requested for a long time by quite a number of people, and we had never really had time to look into it. To find out how difficult it would be, I started looking at it that evening; the README confirmed it shouldn't be too hard. But of course, we use Crowbar as our deployment framework, and the manual way of setting it up is not really something we'd want to recommend. So would it be "not too hard" or just "easy"? There was only one way to know... And guess what happened next?

It took a couple of hours (and two patches) to get this working, including the time for packaging the missing dependencies and for testing. That's one of the nice things we get from using Crowbar: adding new features like this is relatively straightforward, and so we can enable people to deploy a full cloud with all of these nice small features, without requiring them to learn about all the technologies and how to deploy them. Of course this was just a first pass (using the Juno code, btw).

Fast-forward a bit, and we decided to integrate this work. Since it was not a simple proof of concept anymore, we went ahead with some more serious testing. This resulted in us backporting patches for the Juno branch, but also making Nova behave a bit better, since it wasn't aware of Docker as a hypervisor. This last point is a major problem if people want to use Docker as well as KVM, Xen, VMware or Hyper-V — the multi-hypervisor support is something that really matters to us, and this issue was actually the first one that got reported to us ;-) To validate all our work, we of course asked tempest to help us, and the results are pretty good (we still have some failures, but they're related to missing features like volume support).

All in all, the integration went really smoothly :-)

Oh, I forgot to mention: there's also a docker plugin for heat. It's now available with our heat packages in the Build Service as openstack-heat-plugin-heat_docker (Kilo, Juno); I haven't played with it yet, but this post should be a good start for anyone who's curious about this plugin.

15 August 2014

GNOME.Asia Summit 2014

Everyone has been blogging about GUADEC, but I’d like to talk about my other favorite conference of the year, which is GNOME.Asia. This year, it was in Beijing, a mightily interesting place. A giant megalopolis with grandiose architecture, yet at the same time surprisingly easy to navigate thanks to its efficient metro system and affordable taxis. But the air quality is as bad as they say, at least during the incredibly hot summer days when we visited.

The conference itself was great. Co-hosted this year with FUDCon's Asian edition, it was interesting to see a crowd that's really different from the one that attends GUADEC: many more people involved in evangelising, deploying and using GNOME, as opposed to just developing it, which gave me a different perspective.

On a related note, I was happy to see a healthy delegation from Asia at GUADEC this year!

Sponsored by the GNOME Foundation

25 March 2013

SPICE on OSX, take 2

A while back, I made a Vinagre build for OSX. However, reproducing this build needed lots of manual tweaking, the build was not working on newer OSX versions, and in the meantime the recommended SPICE client became remote-viewer. In short, this work was obsolete.

I've recently looked at this again, but this time with the goal of documenting the build process and making the build as easy as possible to reproduce. It is once again based on gtk-osx, with an additional moduleset containing the SPICE modules, and a script to download/install most of what is needed. I've also switched to building remote-viewer instead of vinagre.

This time, I've documented all of this work, but all you should have to do to build remote-viewer for OSX is to run a script, copy a configuration file to the right place, and then run a usual jhbuild build. Read the documentation for more detailed information about how to do an OSX build.

I've uploaded a binary built using these instructions, but it's lacking some features (USB redirection comes to mind), and it's slow, etc, etc, so .... patches welcome! ;) Feel free to contact me if you are interested in making OSX builds and need help getting started, have build issues, ...

11 December 2012

FOSDEM 2013 Crossdesktop devroom Call for talks

The call for talks for the Crossdesktop devroom at FOSDEM 2013 closes this Friday. Don't wait: submit your talk proposal about your favourite part of GNOME now!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

04 July 2011

Going to RMLL (LSM) and Debconf!

Next week, I’ll head to Strasbourg for the Rencontres Mondiales du Logiciel Libre 2011. On Monday morning, I’ll be giving my Debian Packaging Tutorial for the second time. Let’s hope it goes well and I can recruit some future DDs!

Then, at the end of July, I’ll attend Debconf again. Unfortunately, I won’t be able to participate in Debcamp this year, but I look forward to a full week of talks and exciting discussions. There, I’ll be chairing two sessions about Ruby in Debian and Quality Assurance.

17 February 2011

Recent Libgda evolutions

It’s been a long time since I blogged about Libgda (and for the matter since I blogged at all!). Here is a quick outline on what has been going on regarding Libgda for the past few months:

  • Libgda’s latest version is now 4.2.4
  • many bugs have been corrected and it’s now very stable
  • the documentation is now fairly exhaustive and includes a lot of examples
  • a GTK3 branch is maintained; it contains all the modifications needed to make Libgda work in the GTK3 environment
  • the GdaBrowser and GdaSql tools have had a lot of work and are now both mature and stable
  • using the NSIS tool, I’ve made available a new Windows installer for the GdaBrowser and associated tools, available at http://www.gnome.org/~vivien/GdaBrowserSetup.exe. It’s only available in English and French; please test it and report any errors.

In the next months, I’ll work on polishing the GdaBrowser tool even more, since I use it on a daily basis (and of course on correcting bugs).

16 March 2010

Webkit fun, maths and an ebook reader

I have been toying with webkit lately, and even managed to do some pretty things with it. As a consequence, I haven’t worked that much on ekiga, but perhaps some of my experiments will turn into something interesting there. I have an experimental branch with a less-than-fifty-lines patch… I’m still trying to find a way to do more with less code: I want to do as little GObject-inheritance as possible!

That little programming was done while studying class field theory, which is pretty nice in its high-level principles and somewhat awful in its more technical aspects. I also read again some old articles on modular forms, but I can’t say that was “studying”: since it was one of the main objects of my Ph.D., it came back pretty smoothly…

I found a few minutes to enter a brick-and-mortar shop and have a look at the ebook readers on display. There was only *one* of them: the Sony PRS-600. I was pretty unimpressed: the display was too dark (because it was a touch screen?), but that wasn’t the worst deal breaker. I inserted an SD card on which I had put a sample of the type of documents I read: they showed up as a flat list (pain #1), and not all of them showed up (no djvu) (pain #2), and finally, one of them displayed too small… and ended up fully unreadable when I tried to zoom (pain #3). I guess that settles the question I had of whether my next techno-tool would be a netbook or an ebook reader… That probably means I’ll look more seriously into fixing the last bug I reported on evince (internal bookmarks in documents).

16 January 2010

New Libgda releases

With the beginning of the year comes new releases of Libgda:

  • version 4.0.6 which contains corrections for the stable branch
  • version 4.1.4, a beta version for the upcoming 4.2 version

Version 4.1.4’s API is now considered stable and, except for minor corrections, should not be modified anymore.

This new version also includes a new database adapter (provider) to connect to databases through a web server (which of course needs to be configured for that purpose), as illustrated by the following diagram:

WebProvider usage

The database being accessed by the web server can be any type supported by the PEAR::MDB2 module.

The GdaBrowser application now supports defining presentation preferences for each table’s column, which are used when data from a table’s column needs to be displayed:
GdaBrowser table column's preferences
The UI extension now supports improved custom layout, described through a simple XML syntax, as shown in the following screenshot of the gdaui-demo-4.0 program:

Form custom layout

For more information, please visit the http://www.gnome-db.org web site.

05 November 2009

Attracted to FLT

I have been a little stuck for some weeks: a new year started (no, that post hasn’t been stuck since January; the school year starts in September) and I have students to tend to. As I like to say: good students bring work because you have to push them high, and bad students bring work because you have to push them up from low! Either way, it has been keeping me pretty busy.

Still, I found the time to read some more maths, but got lost on something quite unrelated to my main objective: I read about number theory and the ideas behind the proof of Fermat’s Last Theorem (Taylor and Wiles’ theorem now). That was supposed to be my second target! Oh well, I’ll just try to hit my first target now (Deligne’s proof of the Weil conjectures), and then go back to FLT for a new and deeper reading.

I only played a little with ekiga’s code — mostly removing dead code. Not much: low motivation.

11 July 2009

Slides from RMLL (and much more)

So, I’m back from the Rencontres Mondiales du Logiciel Libre, which took place in Nantes this year. It was great to see all those people from the french Free Software community again, and I look forward to seeing them again next year in Bordeaux (too bad the Toulouse bid wasn’t chosen).

The Debian booth, mainly organized by Xavier Oswald and Aurélien Couderc, with help from Raphaël, Roland and others (but not me!), got a lot of visits, and Debian’s popularity is high in the community (probably because RMLL is mostly for über-geeks, and Debian’s market share is still very high in this sub-community).

I spent quite a lot of time with the Ubuntu-FR crew, whom I hadn’t met before. They do awesome work on getting new people to use Linux (providing great docs and support), and do very well (much better than in the past) at giving a good global picture of the Free Software world (Linux != Ubuntu, other projects do exist and play a very large role in Ubuntu’s success, etc). It’s great to see Free Software’s promotion in France being in such good hands. (Full disclosure: I got a free mug (recycled plastic) with my Ubuntu-FR T-shirt, which might affect my judgement.)

I gave two talks, on two topics I wanted to talk about for some time. First one was about the interactions between users, distributions and upstream projects, with a focus on Ubuntu’s development model and relationships with Debian and upstream projects. Second one was about voting methods, and Condorcet in particular. If you attended one of those talks, feedback (good or bad) is welcomed (either in comments or by mail). Slides are also available (in french):

On a more general note, I still don’t understand why the “Mondiales” in RMLL’s title isn’t being dropped or replaced by “Francophones”. Seeing the organization congratulate themselves because 30% of the talks were in English was quite funny, since in most cases, the English part of the talk was “Is there someone not understanding French? No? OK, let’s go on in French.”, and all the announcements were made in French only. Seriously, RMLL is a great (probably the best) French-speaking community event. But it’s not FOSDEM: different goals, different people. Instead of trying (and failing) to make it an international event, it would be much better to focus on making it a better French-speaking event, for example by getting more French-speaking developers to come and talk (you see at least 5 times more French-speaking developers at FOSDEM than at RMLL).

I’m now back in Lyon for two days, before leaving for the Montreal Linux Symposium, then coming back to Lyon for three days, then Debconf from the 23rd to the 31st, and then moving to Nancy, where I will start as an assistant professor (a permanent, tenured position) in September.

22 July 2008

Looking for a job

In September I finish my studies in computer science, so I am starting to look for a job. I really enjoyed my current job at Collabora maintaining Empathy; I learned lots of things about the Free Software world, and I would like to keep working on free software related projects if possible. My CV is available online here.

Do you know of any companies working on free software and GNOME that are looking for new employees? You can contact me by email at xclaesse@gmail.com.

22 April 2008

Enterprise Social Search slideshow

Enterprise Social Search is a way to search, manage, and share information within a company. Who can help you find relevant information and nothing but relevant information? Your colleagues, of course.

Today we are launching at Whatever (the company I work for) a marketing campaign for our upcoming product: Knowledge Plaza. Exciting times ahead!

03 November 2007

git commit / darcs record

I’ve been working with git lately, but I have been missing the darcs user interface. I honestly think the darcs user interface is the best I’ve ever seen; it’s such a joy to record/push/pull (when darcs doesn’t eat your CPU) 🙂

I looked at git add --interactive because it had hunk-based commit, a pre-requisite for a darcs record-style commit, but it has a terrible user interface, so I just copied the concept: running a git diff, filtering hunks, and then piping the filtered diff through git apply --cached.

It supports binary diffs, and file additions and removals. It also asks for new files to be added, even if this is not exactly how darcs behaves, but I always forget to add new files, so I added it. It will probably break on some extreme corner cases I haven’t been confronted with yet, but I gladly accept any patches 🙂
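The hunk-filtering core (split the output of git diff into hunks, keep only the accepted ones, and hand the result to git apply --cached) can be sketched in a few lines of Python. This is a simplified sketch: it does not renumber the new-file offsets in the later @@ headers once a hunk is dropped, which stricter patch tools may reject.

```python
import re

HUNK_HEADER = re.compile(r"^@@ ")

def split_hunks(diff):
    """Split one file's unified diff into (header, hunks): the header is
    everything before the first @@ line, each hunk starts at an @@ line."""
    header, hunks, current = [], [], None
    for line in diff.splitlines(True):
        if HUNK_HEADER.match(line):
            current = [line]
            hunks.append(current)
        elif current is None:
            header.append(line)
        else:
            current.append(line)
    return "".join(header), ["".join(h) for h in hunks]

def select_hunks(diff, wanted):
    """Rebuild a diff containing only the hunks whose index is in 'wanted';
    the result is what gets piped into 'git apply --cached'."""
    header, hunks = split_hunks(diff)
    kept = [h for i, h in enumerate(hunks) if i in wanted]
    return header + "".join(kept) if kept else ""
```

In the real script, 'wanted' is built interactively by asking the "Shall I record this change?" question for each hunk in turn.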

Here’s a sample session of git-darcs-record script:

$ git-darcs-record
Add file:  newfile.txt
Shall I add this file? (1/1) [Ynda] : y

Binary file changed: document.pdf

Shall I record this change? (1/7) [Ynda] : y

foobar.txt
@@ -1,3 +1,5 @@
 line1
 line2
+line3
 line4
+line5

Shall I record this change? (2/7) [Ynda] : y

git-darcs-record
@@ -1,17 +1,5 @@
 #!/usr/bin/env python

-# git-darcs-record, emulate "darcs record" interface on top of a git repository
-#
-# Usage:
-# git-darcs-record first asks for any new file (previously
-#    untracked) to be added to the index.
-# git-darcs-record then asks for each hunk to be recorded in
-#    the next commit. File deletion and binary blobs are supported
-# git-darcs-record finally asks for a small commit message and
-#    executes the 'git commit' command with the newly created
-#    changeset in the index
-
-
 # Copyright (C) 2007 Raphaël Slinckx
 #
 # This program is free software; you can redistribute it and/or

Shall I record this change? (3/7) [Ynda] : y

git-darcs-record
@@ -28,6 +16,19 @@
 # along with this program; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

+# git-darcs-record, emulate "darcs record" interface on top of a git repository
+#
+# Usage:
+# git-darcs-record first asks for any new file (previously
+#    untracked) to be added to the index.
+# git-darcs-record then asks for each hunk to be recorded in
+#    the next commit. File deletion and binary blobs are supported
+# git-darcs-record finally asks for a small commit message and
+#    executes the 'git commit' command with the newly created
+#    changeset in the index
+
+
+
 import re, pprint, sys, os

 BINARY = re.compile("GIT binary patch")

Shall I record this change? (4/7) [Ynda] : n

git-darcs-record
@@ -151,16 +152,6 @@ def read_answer(question, allowed_responses=["Y", "n", "d", "a"]):
        return resp

-def setup_git_dir():
-       global GIT_DIR
-       GIT_DIR = os.getcwd()
-       while not os.path.exists(os.path.join(GIT_DIR, ".git")):
-               GIT_DIR = os.path.dirname(GIT_DIR)
-               if GIT_DIR == "/":
-                       return False
-       os.chdir(GIT_DIR)
-       return True
-
 def git_get_untracked_files():

Shall I record this change? (5/7) [Ynda] : y

# On branch master
# Changes to be committed:
#   (use "git reset HEAD file..." to unstage)
#
#       modified:   document.pdf
#       modified:   foobar.txt
#       modified:   git-darcs-record
#       new file:   newfile.txt
#
# Changed but not updated:
#   (use "git add file file..." to update what will be committed)
#
#       modified:   git-darcs-record
#
What is the patch name? Some cute patch name
Created commit a08f34e: Some cute patch name
 4 files changed, 3 insertions(+), 29 deletions(-)
 create mode 100644 newfile.txt

Get the script here: git-darcs-record script, and put it somewhere in your $PATH. Any comments or improvements are welcome!

22 January 2007

Un nouveau laptop, sans windows !

Voilà, j’y pensais depuis longtemps et c’est maintenant chose faite, je me suis acheté un tout nouveau ordinateur portable.

Je l’ai acheté sur le site français LDLC.com et me suis renseigné pour savoir si il était possible d’acheter les ordinateurs de leur catalogue sans logiciels (principalement sans windows). Je leur ai donc envoyé un email, et à ma grande surprise ils m’on répondu que c’était tout a fait possible, qu’il suffi de passer commande et d’envoyer ensuite un email pour demander de supprimer les logiciels de la commande. J’ai donc commandé mon laptop et ils m’ont remboursé de 20€ pour les logiciels, ce n’est pas énorme sur le prix d’un portable, mais symboliquement c’est déjà ça.

Still, I have some questions: why isn't this offer mentioned on the LDLC site? Looking under my brand new laptop, I noticed something odd: the remains of a sticker that had been peeled off, exactly where the WinXP activation key is usually stuck. The flat €20 refund from LDLC also seems strange, since LDLC is only a reseller, not a manufacturer, so they buy the machines with Windows already installed. All this leads me to believe that LDLC is the one losing the €20, and I wonder to what end. To please free-software customers? To avoid lawsuits over bundled sales? To get the unwanted licences refunded in turn by the manufacturer/Microsoft, and perhaps make more than €20 if the OEM licences are worth more than that? This will probably always remain a mystery.

So I installed Ubuntu, which runs rather well. I was even very impressed by NetworkManager, which connects me automatically to wifi or wired networks depending on availability, and even configures a zeroconf network when it cannot find a DHCP server. That is very handy for transferring data between two computers: just plug an ethernet cable between them (it also works over wifi, but I haven't tested that yet) and the whole network is configured automatically without touching anything. Really magical! Windows can go hide; Ubuntu is far easier to use!

20 December 2006

Documenting bugs

I hate having to write about bugs in the documentation. It feels like waving a big flag that says ‘Ok, we suck a bit’.

Today, it’s the way fonts are installed, or rather, they aren’t. The Fonts folder doesn’t show the new font, and the applications that are already running don’t see them.

So I’ve fixed the bug that was filed against the documentation. Now it’s up to someone else to fix the bugs in Gnome.

05 December 2006

Choice and flexibility: bad for docs

Eye of Gnome comes with some nifty features like support for EXIF data in jpegs. But this depends on a library that isn’t a part of Gnome.

So what do I write in the user manual for EOG?

‘You can see EXIF data for an image, but you need to check the innards of your system first.’
‘You can maybe see EXIF data. I don’t know. Ask your distro.’
‘If you can’t see EXIF data, install the libexif library. I’m sorry, I can’t tell you how you can do that as I don’t know what sort of system you’re running Gnome on.’

The way GNU/Linux systems are put together is perhaps great for people who want unlimited ability to customize and choose. But it makes it very hard to write good documentation. In this sort of scenario, I would say it makes it impossible, and we’re left with a user manual that looks bad.

I’ve added this to the list of use cases for Project Mallard, but I don’t think it’ll be an easy one to solve.

Sources

Planète GNOME-FR

Planète GNOME-FR is an overview of the life, the work and, more generally, the world of the members of the GNOME-FR community.

Some posts are written in English, as we collaborate with people from all over the world.

Last updated:
09 December 2023 at 04:29 UTC
All times are UTC.

Colophon

Planète GNOME-FR is powered by the Planet aggregator, cron, Python, and Red Hat (which hosts this server).

The site design is based on that of the GNOME and Planet GNOME sites.

Planète GNOME-FR is maintained by Frédéric Péters and Luis Menina. If you would like to add your blog to this planet, just open a bug. Feel free to contact us by email for any other question.