Endless Orange Week

Earlier this month, I participated in Endless Orange Week, a program where the entire Endless team engages in projects designed to grow our collective learning related to our skills, work, and mission, while our ordinary work responsibilities and projects are put on pause.

I used the opportunity to refresh my memory of how our graphical boot splash animation is implemented, and to understand its requirements and limitations so we can redesign it.

Our current boot splash animation was introduced with Endless OS 3.1.3:

Endless OS boot animation from versions 3.1.3 to 4.0.x.

For some background, the graphics that appear during the boot process of Endless OS are created by the following sequence of actions:

  1. When the computer is turned on, its firmware (aka BIOS) turns on the video card / screen etc. and draws the manufacturer logo on screen. This happens before any piece of Endless OS has been loaded, and it is specific to each computer model / manufacturer. This image is also made available by the firmware to the OS via the BGRT ACPI table.
  2. Our first stage bootloader (shim / fallback / GRUB’s 1st stage) is loaded and executed by the firmware, and makes its best effort to not draw anything on screen, preserving what has been drawn on the previous step.
  3. The second stage bootloader (GRUB) is loaded and executed by the first stage, and in normal boots it again does not draw anything on screen. However, this is the point where the user can press ESC to show the boot menu (which is also automatically shown on dual-boot installations or if the previous boot failed). If the boot menu is shown, the screen is cleared to a blank black screen, and a text menu is drawn.
  4. The Linux kernel is loaded by the second stage bootloader and draws the “fallback framebuffer” on screen, which by default is the manufacturer logo obtained via BGRT. On normal boots this is unnoticeable, since the screen is already showing that exact same image. But if the boot menu has been shown, the graphics flow will be OEM logo -> text boot menu -> OEM logo again.
  5. The kernel loads the init program (systemd) which then starts all the different services needed to bring up the system’s userspace. One of the first services executed is Plymouth, which draws the boot splash animation on the screen and waits until the boot process finishes.
  6. One of the last pieces started during the boot process is GDM, which is responsible for providing the user account selection screen and spawning a new user session upon login. Once GDM is loaded, Plymouth carefully hands off control of the screen to GDM, and we make sure Plymouth's boot splash animation and GDM's greeter background match, so the transition is smooth.
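For the curious, the BGRT table mentioned in step 1 is exposed by the Linux kernel under /sys/firmware/acpi/bgrt on UEFI systems that provide one. A minimal Python sketch to peek at where the firmware placed the logo (the sysfs path is standard; the helper name is just for illustration):

```python
from pathlib import Path

# Standard sysfs location where the kernel exposes the ACPI BGRT table.
BGRT = Path("/sys/firmware/acpi/bgrt")

def bgrt_logo_position():
    """Return the (x, y) offset of the firmware logo, or None if no BGRT table."""
    if not BGRT.is_dir():
        return None  # legacy BIOS boot, or firmware without a BGRT table
    x = int((BGRT / "xoffset").read_text())
    y = int((BGRT / "yoffset").read_text())
    return (x, y)

print(bgrt_logo_position())
```

The image itself is also available there, as a BMP file named `image`, which is what the kernel redraws as the "fallback framebuffer" logo in step 4.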

While our current animation looks pretty good, there are a few drawbacks with it (and with the overall graphical boot):

  • This animation does not scale well on HiDPI screens: after scaling, the logo is no longer drawn starting from the edge of the screen.
  • It does not support showing a progress bar while updates are being applied (which for us actually happens for some run-after-update jobs, not actual system updates per se).
  • While we hide the GRUB menu by default, when we do need to show it we use a text mode menu, and after the entry is selected the OEM logo gets displayed again.
  • The boot splash animation background uses a different color than upstream GDM, which means that for a smooth transition we have been carrying patches to change GDM's background color. The animation also has a noisy texture that matched upstream GDM's background at the time it was implemented, but that texture has since been dropped upstream and we missed dropping it from our boot splash animation. Oops!

Most of this project’s time was spent investigating and experimenting around Plymouth, the software used to implement the boot splash animation, or more specifically, in Plymouth’s two-step plugin. Plymouth does not have much documentation, but luckily the code is very well organized and easy to read, so the lack of documentation did not pose a real problem.
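For reference, a two-step theme is selected through a small `.plymouth` file next to its assets. The fragment below is modeled on upstream's spinner theme; the theme name is hypothetical and the exact key set is an assumption, so check the two-step plugin source before relying on any particular option:

```ini
# /usr/share/plymouth/themes/example/example.plymouth (hypothetical theme)
[Plymouth Theme]
Name=Example
Description=A sketch of a boot splash based on the two-step plugin
ModuleName=two-step

[two-step]
# Directory holding the animation frames and watermark images
ImageDir=/usr/share/plymouth/themes/example
# Fractions of the screen (.5 = centered)
HorizontalAlignment=.5
VerticalAlignment=.5
# Solid background color; this is what has to match GDM's greeter
# background for a smooth hand-off (the color here is a placeholder)
BackgroundStartColor=0x2e3436
BackgroundEndColor=0x2e3436
```

Setting the background colors here, rather than patching GDM, is one way to keep the Plymouth-to-GDM transition smooth without carrying downstream changes.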

With that, I now have a clear understanding of the requirements and limitations for implementing a boot splash animation, so I can schedule some time with our designer and help her brainstorm a new boot animation for Endless OS 5.

I have also already reverted the background of our current boot splash and greeter to the upstream solid grey, and changed our kernel configuration to disable drawing the OEM logo between the bootloader and the boot splash; these are some small improvements we can make right away to help polish the graphical boot a bit more.
Finally, while looking into Plymouth, I submitted upstream a change that we had been carrying downstream for 6 years, plus two fixes for typos in comments I found while reading through the source code. Once we have a new boot splash animation in place, I believe we will be able to drop our downstream changes to Plymouth, making it one less package for us to maintain ourselves at Endless.

FOSS haiku

knowledge is given
to those who carefully read
code blown in the wind

Sprint retrospective

As some of you may know, last June I moved away from Campinas. Ten incredibly good years have passed since I moved there from small Santana de Parnaíba, on the outskirts of São Paulo, where I grew up. During this time I managed to major in Computer Engineering, and in Computer Science afterwards; learned 2 new languages (Spanish and Italian); and tried a lot of different activities, like rock climbing, circus (aerial acrobatics and fire breathing), and kung fu, among other things. This was also the period when I made my world bigger, getting to know North America, Europe, and more of Brazil and South America.

More important than that, though, were the people I met along the way. There wouldn't be enough bits on the internet to describe all the good experiences I had and how much I learned with them. Even those who passed by only briefly were partly responsible for shaping the person I am now. I'm very glad for everyone I met in this phase of my life; I have an enormous collection of good memories from this period, and no regrets that I can recall. You know who you are: thanks to all of you.

After a short period back at my parents' place, I'm now in a new phase. In December I moved to Recife, the capital of the state of Pernambuco, in the NE region of Brazil. I moved here to work for INdT, a research institute of Nokia that works with a lot of different upstream free software projects. At first I was helping our WebKit team with the Qt WebKit port, but now I'm back on BlueZ, the Linux Bluetooth stack. INdT has a lot of smart people and a very good work environment. Life in Recife has been good so far, although there is a big cultural difference from Campinas and São Paulo. I live near the office, so I can bike to work every day, and on weekends there is always an amazing beach or small historical city to visit. The next sprint looks very promising.

Bluetooth Workshop at the Kernel Summit

Yesterday and today I got the opportunity to attend the Bluetooth workshop, which is part of the Linux Kernel Summit. During the two days a lot of topics were discussed, with highlights for the release of BlueZ 5, which will finally remove all the API that has been marked as deprecated but is still lying around. This includes the Unix API, which means applications that don't yet use the D-Bus fd-passing API will have to upgrade. Now.

Another important removal will be the ALSA modules for A2DP and HSP. I know some people are still using them, despite the fact that PulseAudio's Bluetooth support has been around for about 4 years now and is the recommended way to use both profiles. So now it's serious: upgrade your installation or you are on your own from now on. The GStreamer support will remain in the tree, but still, the recommended way to use these profiles is through PulseAudio.

A lot of discussion also happened around the LE profiles (which are part of the Bluetooth 4.0 spec) and how to correctly add support for them, well integrated with the current BR/EDR profiles. There were also discussions on AMP support and the MGMT interface, which will help add support for that.

Finally, we had an audio discussion together with some of the main PulseAudio hackers (Colin and Arun), who were around for the GStreamer conference.

I would like to thank the Linux Foundation for helping me attend this event. Seriously, you rock!

Desktop Summit and Linux Plumbers

The last couple of months have been very busy with travelling (among other things I may, or may not, write about in upcoming posts), and because of that I've been really away from teh interwebs.

First of all, I got the opportunity to attend the really amazing Desktop Summit in Berlin. There I met a lot of very nice and interesting people, with highlights for Will Thompson (Collabora), who has been helping me with the OTR project on Telepathy; Lennart Poettering (Red Hat), whom I worked with in the past, when writing the PulseAudio Bluetooth modules; Colin Guthrie, the current PulseAudio maintainer; and lots of other very cool and smart people who were around. I also met some old friends from university who are now working at INdT, and other not-that-old friends from the GNOME community.

Besides all the networking and good conversations in the hallways and at the social events (which were amazing, BTW), I had the awesome experience of helping the conference organization as a volunteer, and I highly recommend it. Not only because I got to meet a lot of cool people, or because I really felt good helping to build up this amazing conference, but also because it was very, very fun, and I'm looking forward to this opportunity again next year (if I manage to attend GUADEC 2012). I also gave a lightning talk on my Telepathy OTR work, and surprisingly got big applause from someone I couldn't spot in the audience. Many thanks for that! 🙂

After that I spent one more week in Europe, then went back home for a couple of days, and then on to the amazing sand dunes of a place called “Lençóis Maranhenses” with my girlfriend, in the NE of Brazil. While there, I got an email from Lennart saying that my talk for the Linux Plumbers Conference was accepted! Yay! So I ran to look for flights, and about a week later I was flying to SFO to attend LPC in Santa Rosa.

LPC is very different from the Desktop Summit, beginning with its size: about 300 people, compared to 800 at the DS. Additionally, most sessions at LPC are very, very technical, with a lot of discussion during the sessions. There is also much less of a “we're a community” feeling compared to the DS, probably because most projects are much smaller than the big window managers.

All the talks I attended at LPC were very good, and I had quick chats with a lot of different people between sessions. I got to know Arun Raghavan (Collabora) better, whom I was introduced to at the DS and who is also one of the PulseAudio hackers; and I also met Pierre-Louis Bossart (Intel), an experienced audio engineer with whom I had exchanged emails about his module-loopback for PulseAudio about 3 years earlier, back when I wrote the A2DP sink.

My talk about AVRCP in the Audio track of LPC went well, and I got some nice feedback on how to make the Linux desktop AVRCP 1.4 compliant. Stay tuned to read more on that in the future. I also need to congratulate the LPC organizing committee for such a great conference and the awesome social events and catering during the event. On the last night I met Jamey Sharp (Apters), an XCB developer, and we had a really nice chat about travelling, e-book readers, and so on.

After the event, I spent 3 really cool days in San Francisco with my friend Bruno Cardoso. Thanks for the good time, bro!

And last but definitely not least, I want to say big thanks for both the GNOME Foundation, for sponsoring my trip to the Desktop Summit, and the Linux Foundation, for sponsoring my trip to the Linux Plumbers Conference.

Berlin, here I go!

As a picture is worth more than a thousand words…

I'm going to Desktop Summit Berlin 2011. Sponsored by the GNOME Foundation.

OTR over XMPP on Telepathy

Over the last couple of weeks I've been working on adding support for the Off-the-Record protocol to the Telepathy communications framework, more precisely to Gabble, the XMPP connection manager (for those to whom the acronym doesn't ring a bell, XMPP is the protocol spoken by the Jabber and Google Talk IM services, among others).

At the time of this writing, OTR session establishment is working and it's possible to exchange encrypted messages with any OTR-enabled IM client that talks XMPP. A draft interface was discussed on the Telepathy mailing list at the beginning of this work, but it has already seen some changes during development. Peer authentication, which was still missing, is also working at this moment (as of July 24th). I'll also start to work on the Empathy bits needed to expose this feature in the UI. The idea is to have this finished and submitted upstream for review by the end of this month.

I’m doing this work as GSoC project for the GNOME foundation, with the help of Will Thompson, a seasoned Telepathy hacker.

PyWeek(end) is over

Teamed up with Bruno Dilly, Leandro Pereira, and Rafael Antognolli (colleagues at ProFUSION), I participated in the April 2011 PyWeek (although we only worked during the weekend). We used the Python bindings for the Enlightenment Foundation Libraries and received the 3 awards, mostly referring to the difficulties people found when trying to compile the dependencies to run the game. This was expected, since only a very few distros package a recent version of the libs (Gentoo is the only one I can remember right now) and we didn't provide any Windows package, just the source code. Also, we completely forgot to add in-game instructions on how to play (they were added later, when we realized this fault), so the few who managed to run the game had a second hard time trying to figure out what the game was all about (and here I have to give kudos to the guy who dug into the source to figure out the gameplay).

But even with the problems mentioned above, I'm quite satisfied with the results. Most of us had zero experience with game development, and we managed to put the pieces together in one night (with pizza and a few beers) and one afternoon.

For more information you can visit our team's page on the contest website: http://pyweek.org/e/Migueh/. The source code can be found at https://gitorious.org/nines-time. And below you can find a screenshot for your visual delight (congrats to acidx for most of the graphic work).

Screenshot of the gameplay

Paulo Leminski, O assassino era o escriba (The murderer was the scribe)

My syntax teacher was the type of the nonexistent subject.
A pleonasm, the main predicate of his life,
as regular as a paradigm of the 1st conjugation.
Between a subordinate clause and an adverbial adjunct,
he had no doubts: he would always find an asyndetic
way to torture us with an appositive.
He married a regency.
He was unhappy.
He was as possessive as a pronoun.
And she was bitransitive.
He tried to go to the USA.
It didn't work out.
They found an indefinite article in his luggage.
The interjection with the mustache declined expletive particles,
connectives, and agents of the passive all the time.
One day, I killed him with a direct object to the head.

AVRCP Metadata

Quite some time has passed since my last post, and I want to update you on what has been keeping me busy lately. I got accepted for GSoC 2010 with BlueZ again this year, and I just passed the midterm evaluation. The following is the project I'm working on:

Add metadata and player status information exchange to bluetoothd’s AVRCP plugin

– Abstract –

Today the AVRCP — Audio and Video Remote Control Profile — plugin in bluetoothd supports only basic remote control commands (play, next, stop, etc.). Support for metadata and player status information about the stream being played over Bluetooth is missing for full compliance with AVRCP 1.3. In addition, AVRCP 1.4 adds the ability to browse content over Bluetooth, so controlling a media player remotely becomes a richer experience for the end user.

All of this is especially useful for devices with displays and more elaborate controls, like fancy headsets, cellphones, and car kits. One of the most illustrative use cases of these technologies is a user who gets into their car with a Bluetooth-enabled cellphone holding a media player and their music/video collection, and has a Bluetooth car kit with wheel-integrated controls. The user can then browse and play music/videos on the cellphone — which is possibly attached to a car stand that also charges the phone's battery and holds the phone in a position suitable for watching videos — and have meta-information about the media shown on the car kit's display without moving their hands from the wheel.

– Project Details –

AVRCP has two roles defined: Controller (CT) and Target (TG). Command frames are sent from the CT to the TG in order to remote control it. Metadata is sent from the TG to the CT. This project aims to add metadata and player status information support to bluetoothd, so it fully complies with AVRCP 1.3 TG specification. To accomplish this goal, we need some level of integration between the media players and bluetoothd.

Most media players support, or are in the process of adopting, a standard control interface via D-Bus called MPRIS — the Media Player Remote Interfacing Specification [1] — whose current version is 1.0. Discussion about its definition happens on the MPRIS mailing list [2]. Some players known to implement it, natively or via plugins, are Amarok, Audacious, BMPx, Corn, Dragon Player, Exaile, MPD, QMMP, Songbird, Totem, VLC, and XMMS2. MPRIS defines a /Player object with methods not only for controlling players (play, next, pause, etc.) but also for retrieving metadata (GetMetadata) and player status (GetStatus). Two signals provide this information as well (TrackChange and StatusChange). In order to obtain metadata and player status, bluetoothd could call those methods or listen to those signals, and send the info to its AVRCP CT peer.
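To make the bridge concrete, a tiny sketch of how a GetMetadata() result could be mapped onto AVRCP 1.3 media attribute IDs (0x1 Title through 0x7 Playing time, per the profile spec). The MPRIS 1.0 key names below and the helper itself are assumptions for illustration, not bluetoothd or MPRIS API:

```python
# AVRCP 1.3 media attribute IDs, mapped to the MPRIS 1.0 metadata keys
# that (roughly) carry the same information. 0x5 (total tracks) is left
# out since MPRIS 1.0 has no direct equivalent.
AVRCP_ATTRS = {
    0x1: "title",
    0x2: "artist",
    0x3: "album",
    0x4: "tracknumber",
    0x6: "genre",
    0x7: "mtime",  # playing time, in milliseconds
}

def mpris_to_avrcp(metadata: dict) -> dict:
    """Return {avrcp_attribute_id: string_value} for the attributes present,
    since AVRCP transfers all attribute values as strings."""
    return {attr: str(metadata[key])
            for attr, key in AVRCP_ATTRS.items() if key in metadata}

print(mpris_to_avrcp({"title": "Song", "artist": "Band", "mtime": 215000}))
```

Whichever component does this translation (bluetoothd or an intermediary) would then pack the resulting attribute/value pairs into the AVRCP metadata PDUs.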

However, this model has some limitations. The most obvious one comes from the D-Bus design: media players are started by users, so their D-Bus services live on the session bus; OTOH, bluetoothd is a system-wide daemon and can only connect to the system bus. Another issue is that bluetoothd can handle multiple devices concurrently. This means the following scenario can happen: more than one audio sink connected to bluetoothd, receiving different audio streams and expecting their respective metadata and player status at the same time. bluetoothd would have to map each media application and its MPRIS service to the respective audio stream in order to send the correct information to its peers.

Audio streaming from media applications to bluetoothd, in its recommended configuration, is handled by PulseAudio [3]. PulseAudio is a modern sound server that provides very complete integration between media applications and sound devices. It is also the de facto standard for accessing sound devices on POSIX OSes, being an integral part of all relevant modern Linux distributions and used in various mobile devices by multiple vendors. PulseAudio is implemented as a user-wide daemon, meaning it can access methods and listen to signals on both the session and system buses. It also knows which application is playing to which sound device. These characteristics make it a very appropriate solution to the issues pointed out before.

The proposed approach for this project is to have PulseAudio mediate the data exchange between media players and bluetoothd. For each audio stream going to a Bluetooth audio sink, it would listen to the TrackChange and StatusChange signals coming from the application generating that stream and send this information to the remote peer through AVRCP, by calling a method exposed on bluetoothd's org.bluez.Control interface. This integration could later be extended to basic player controls, having PulseAudio listen to bluetoothd signals and call methods on media players' MPRIS interfaces to trigger the matching action. A similar approach could also be modeled to add the content browsing capability defined by the AVRCP 1.4 specification.

– Have you worked on a Linux system before? –

I've been working with Linux since 2002, when I started my Computer Engineering undergrad. In the beginning I only did school projects on Linux, but since mid-2002 it has been my main system. By that time I had Conectiva on my home computer, Red Hat at school, and Slackware on my home router. After a few years I got in touch with Gentoo, and I liked the idea so much that it became my main distro. In addition, I also started working on a project at the University of Campinas to make a Gentoo-based Linux distribution for Itautec, a Brazilian IT company. Nowadays I'm using Ubuntu on my laptop, for the convenience of installation and little need for tweaking, so I have more time to work on other projects, but I'm comfortable using and administering any distribution.

– Have you contributed to an open source project? If yes, please provide the details –

My first contribution was a little bit frustrating, since I never got any answer (a two-line patch to python-dialog — http://sourceforge.net/projects/pythondialog/ — to support one dialog that was missing). After that, I made some performance improvements to ImageMagick and DevIL using OpenMP, both of which were accepted upstream. I also completed two GSoC projects with BlueZ (the PulseAudio A2DP modules and the A2DP Sink). After that, I did the HFP integration into PulseAudio, and I'm currently helping Gustavo Padovan with the Enhanced Retransmission and Streaming modes for L2CAP, mostly reviewing and testing his implementation. Outside of Bluetooth, I've lately made some contributions to the Enlightenment Foundation Libraries and the E17 WM (fixes to ethumb and emotion, an oFono module for E17), and to LightMediaScanner, a lightweight metadata scanner for media files.

– What is your educational qualification (grad/under-grad)? –

I'm a Computer Engineer, with my major from the University of Campinas (Unicamp), Brazil. I started my master's degree in Computer Science in 2008, at the same university.

– Why do you want to do a project involving Bluetooth/BlueZ? –

I've already been a GSoC student with BlueZ, and it was a very good experience. I worked with the audio profiles, which most of the time are the profiles that work together with AVRCP, so this would be a kind of continuation of my previous projects from the user-experience point of view. The initial motivation for choosing BlueZ last year was that it's something I use in my daily activities and that I'm very enthusiastic about wireless devices and mobility. Besides that, I have always had an interest in network-related stuff.

– If your application is accepted, will it be a part of your graduation process or will it be just a hobby? –

If I get accepted to this project, it will not be part of my graduation process.

– Give us an *estimate* of your schedule (exam periods, etc.) and how much time you would be able to dedicate to the project. –

The project has 2 main parts: integration with media applications, which consists of obtaining metadata and player status from the applications and passing them to bluetoothd; and Bluetooth communication, which consists of implementing the “Procedure of Metadata Transfer” and its PDUs, as defined in the AVRCP specification, inside bluetoothd's AVRCP plugin. Both parts can be implemented independently, since the integration between them will be done through bluetoothd's D-Bus API. I plan to start with the Bluetooth communication part, so that when doing the media application integration I can call the already-implemented AVRCP D-Bus methods. I'll then have the Bluetooth communication task as the midterm milestone and the media application integration as the final milestone.

[1] http://xmms2.org/wiki/MPRIS

[2] http://mailman.videolan.org/listinfo/mpris

[3] http://pulseaudio.org/