This is a tutorial from CG Matter, "PBR for IDIOTS". It is very useful, and the main idea is to understand the process and then render an image with your textures.
For this process, we need to go to this site: https://texturehaven.com/, then go to Textures and download all the maps.
You can also create your map images with AwesomeBump under Linux.
In this line of work, we all stumble at least once upon a problem
that turns out to be extremely elusive and very tricky to narrow down
and solve. If we're lucky, we might have everything at our
disposal to diagnose the problem but sometimes that's not the
case – and in embedded development it's often not the
case. Add to the mix proprietary drivers, lack of debugging symbols, a
bug that's very hard to reproduce under a controlled environment,
and weeks in partial confinement due to a pandemic and what you have
is better described as a very long lucid nightmare. Thankfully,
even the worst of nightmares end when morning comes, even if sometimes
morning might be several days away. And when the fix to the problem is
in an unimaginable place, the story is definitely one worth
telling.
The problem
It all started with one
of Igalia's customers deploying
a WPE WebKit-based browser in
their embedded devices. Their CI infrastructure had detected a problem
caused when the browser was tasked with creating a new webview (in
layman terms, you can imagine that to be the same as opening a new tab
in your browser). Occasionally, this view would never load, causing
ongoing tests to fail. For some reason, the test failure had a
reproducibility of ~75% in the CI environment, but during manual
testing it would occur with less than 1% probability. For reasons
that are beyond the scope of this post, the CI infrastructure was not
reachable in a way that would allow access to running
processes in order to diagnose the problem more easily. So with only
logs at hand and less than a 1-in-100 chance of reproducing the bug
myself, I set to debug this problem locally.
Diagnosis
The first thing that became evident was that, whenever this bug would
occur, the WebKit feature known as web extension (an
application-specific loadable module that is used to allow the program
to have access to the internals of a web page, as well as to enable
customizable communication with the process where the page contents
are loaded – the web process) wouldn't work. The browser would be
forever waiting for the web extension to load, and since that wouldn't
happen, the expected page wouldn't load. The first place to look,
then, is the web process, to try to understand what is preventing
the web extension from loading. Enter our good friend GDB, with
less than spectacular results thanks to stripped libraries.
#0 0x7500ab9c in poll () from target:/lib/libc.so.6
#1 0x73c08c0c in ?? () from target:/usr/lib/libEGL.so.1
#2 0x73c08d2c in ?? () from target:/usr/lib/libEGL.so.1
#3 0x73c08e0c in ?? () from target:/usr/lib/libEGL.so.1
#4 0x73b0d6a8 in ?? () from target:/usr/lib/libEGL.so.1
#5 0x75f84208 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#6 0x75fa0b7e in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#7 0x7561eda2 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#8 0x755a176a in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#9 0x753cd842 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#10 0x75451660 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#11 0x75452882 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#12 0x75452fa8 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#13 0x76b1de62 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#14 0x76b5a970 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#15 0x74bee44c in g_main_context_dispatch () from target:/usr/lib/libglib-2.0.so.0
#16 0x74bee808 in ?? () from target:/usr/lib/libglib-2.0.so.0
#17 0x74beeba8 in g_main_loop_run () from target:/usr/lib/libglib-2.0.so.0
#18 0x76b5b11c in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#19 0x75622338 in ?? () from target:/usr/lib/libWPEWebKit-1.0.so.2
#20 0x74f59b58 in __libc_start_main () from target:/lib/libc.so.6
#21 0x0045d8d0 in _start ()
Of all the threads in the web process, after much tinkering it
slowly became clear that one of the places to look into was
that poll()
call. I will spare you the details related to what other threads were
doing; suffice it to say that whenever the browser would hit the bug,
there was a similar stacktrace in one thread, going
through libEGL to a
call to poll() on top of the stack, that would never
return. Unfortunately, a stripped EGL driver coming from a proprietary
graphics vendor was a bit of a showstopper, as was the inability to
have proper debugging symbols inside the device (did you know
that a non-stripped WebKit library binary with debugging symbols can
easily run GDB and your device out of memory?). The best one could do
to improve that was to use the
gcore
feature in GDB, and extract a core from the device for post-mortem
analysis. But for some reason, such a stacktrace wouldn't give
anything interesting below the poll() call to understand
what's being polled here. Did I say this was tricky?
What polls?
Because WebKit is a multiprocess web engine, having system calls
that signal, read, and write in sockets communicating with other
processes is an everyday thing. Not knowing what a poll()
call is doing and who it is trying to listen to is not a
great position to be in. Because the call is happening under the EGL library, one
can presume that it's graphics related, but there are still
different possibilities, so trying to find out what this call is polling is
a good idea.
A trick I learned while debugging this is that, in the absence of
debugging symbols that would give a straightforward look into
variables and parameters, one can examine the CPU registers and try to
figure out from them what the parameters to function calls are. Let's
do that with poll(). First, its signature.
int poll(struct pollfd *fds, nfds_t nfds, int timeout);
Now, let's examine the registers.
(gdb) f 0
#0 0x7500ab9c in poll () from target:/lib/libc.so.6
(gdb) info registers
r0 0x7ea55e58 2124766808
r1 0x1 1
r2 0x64 100
r3 0x0 0
r4 0x0 0
Registers r0, r1, and r2
contain poll()'s three
parameters. Because r1 is 1, we know that there is only
one file descriptor being polled. fds is a pointer to an
array with one element then. Where is that first element? Well, right
there, in the memory pointed to directly by
r0. What does struct pollfd look like?
struct pollfd {
    int fd;         /* file descriptor */
    short events;   /* requested events */
    short revents;  /* returned events */
};
What we are interested in here is the contents of fd,
the file descriptor that is being polled. Memory alignment is again on
our side; we don't need any pointer arithmetic here. We can
inspect directly the register r0 and find out what the
value of fd is.
(gdb) print *(int *) 0x7ea55e58
$3 = 8
So we now know that the EGL library is polling the file descriptor
with an identifier of 8. But where is this file descriptor coming
from? What is on the other end? The /proc file system can
be helpful here.
# pidof WPEWebProcess
1944 1196
# ls -lh /proc/1944/fd/8
lrwx------ 1 x x 64 Oct 22 13:59 /proc/1944/fd/8 -> socket:[32166]
So we have a socket. What else can we find out about it? Turns out,
not much without
the unix_diag
kernel module, which was not available in our device. But we are
slowly getting closer. Time to call another good friend.
Where GDB fails, printf() triumphs
Something I have learned from many years working with a project as
large as WebKit, is that debugging symbols can be very difficult to
work with. To begin with, it takes ages to build WebKit with them.
When cross-compiling, it's even worse. And then, very often the
target device doesn't even have enough memory to load the symbols
when debugging. So they can be pretty useless. It's then when
just
using fprintf()
and logging useful information can simplify things. Since we know that
it's at some point during initialization of the web process that
we end up stuck, and we also know that we're polling a file
descriptor, let's find some early calls in the code of the web
process and add some
fprintf() calls with a bit of information, specially in
those that might have something to do with EGL. What can we find out
now?
Oct 19 10:13:27.700335 WPEWebProcess[92]: Starting
Oct 19 10:13:27.720575 WPEWebProcess[92]: Initializing WebProcess platform.
Oct 19 10:13:27.727850 WPEWebProcess[92]: wpe_loader_init() done.
Oct 19 10:13:27.729054 WPEWebProcess[92]: Initializing PlatformDisplayLibWPE (hostFD: 8).
Oct 19 10:13:27.730166 WPEWebProcess[92]: egl backend created.
Oct 19 10:13:27.741556 WPEWebProcess[92]: got native display.
Oct 19 10:13:27.742565 WPEWebProcess[92]: initializeEGLDisplay() starting.
Two interesting findings from the fprintf()-powered
logging here: first, it seems that file descriptor 8 is one known to
libwpe
(the general-purpose library that powers the WPE WebKit port). Second,
that the last EGL API call right before the web process hangs
on poll() is a call
to eglInitialize(). fprintf(),
thanks for your service.
Number 8
We now know that the file descriptor 8 is coming from WPE and is
not internal to the EGL library. libwpe gets this file descriptor from
the UI process,
as one
of the many creation parameters that are passed via IPC to the
nascent process in order to initialize it. Turns out that this file
descriptor in particular, the so-called host client file descriptor,
is the one that the freedesktop backend of libWPE, from here onwards
WPEBackend-fdo,
creates when a new client is set to connect to its Wayland display. In
a nutshell, in presence of a new client, a Wayland display is supposed
to create a pair of connected sockets, create a new client on the
Display-side, give it one of the file descriptors, and pass the other
one to the client process. Because this will be useful later on,
let's see how that is
currently
implemented in WPEBackend-fdo.
int pair[2];
if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, pair) < 0)
return -1;
int clientFd = dup(pair[1]);
close(pair[1]);
wl_client_create(m_display, pair[0]);
The file descriptor we are tracking down is the client file
descriptor, clientFd. So we now know what's going on in this socket:
Wayland-specific communication. Let's enable Wayland debugging next,
by running all relevant processes with WAYLAND_DEBUG=1. We'll get back
to that code fragment later on.
A Heisenbug is a Heisenbug is a Heisenbug
Turns out that enabling Wayland debugging output for a few
processes is enough to alter the state of the system in such a way
that the bug does not happen at all when doing manual
testing. Thankfully the CI's reproducibility is much higher, so
after waiting overnight for the CI to continuously run until it hit
the bug, we have logs. What do the logs say?
WPEWebProcess[41]: initializeEGLDisplay() starting.
-> wl_display@1.get_registry(new id wl_registry@2)
-> wl_display@1.sync(new id wl_callback@3)
So the EGL library is trying to fetch the Wayland
registry and it's doing a wl_display_sync() call
afterwards, which will block until the server responds. That's
where the blocking poll() call comes from. So, it turns
out, the problem is not necessarily on this end of the Wayland socket,
but perhaps on the other side, that is, in the so-called UI process
(the main browser process). Why is the Wayland display not
replying?
The loop
Something that is worth mentioning before we move on is how the
WPEBackend-fdo Wayland display integrates with the system. This
display is a nested display, with each web view a client, while it is
itself a client of the system's Wayland display. This can be a bit
confusing if you're not very familiar with how Wayland works, but
fortunately there is
good
documentation about Wayland elsewhere.
The way that the Wayland display in the UI process of a WPEWebKit
browser is integrated with the rest of the program, when it uses
WPEBackend-fdo, is through the
GLib
main event loop. Wayland itself has an event loop implementation
for servers, but for a GLib-powered application it can be useful to
use GLib's and integrate Wayland's event processing with the different
stages of the GLib main loop. That is precisely how WPEBackend-fdo is
handling its clients' events. As discussed earlier, when a new client
is created a pair of connected sockets are created and one end is
given to Wayland to control communication with the
client. GSourceFunc
functions are used to integrate Wayland with the application main
loop. In these functions, we make sure that whenever there are pending
messages to be sent to clients, those are sent, and whenever any of
the client sockets has pending data to be read, Wayland reads from
them, and dispatches the events that might be necessary in response
to the incoming data. And here is where things start getting really
strange, because after doing a bit of
fprintf()-powered debugging inside the Wayland-GSourceFuncs functions,
it became clear that the Wayland events from the clients were never
dispatched, because the dispatch() GSourceFunc was not being called,
as if there was nothing coming from any Wayland client. But how is
that possible, if we already know that the web process client is
actually trying to get the Wayland registry?
To move forward, one needs to understand how the GLib main loop
works, in particular, with Unix file descriptor sources. A very brief
summary of this is that, during an iteration of the main loop, GLib
will poll file descriptors to see if there are any interesting events
to be reported back to their respective sources, in which case the
sources will decide whether to trigger the dispatch()
phase. A simple source might decide in its dispatch()
method to directly read or write from/to the file descriptor; a
Wayland display source (as in our case), will
call wl_event_loop_dispatch() to do this for us.
However, if the source doesn't find any interesting events, or if
the source decides that it doesn't want to handle them,
the dispatch() invocation will not happen. More on the
GLib main event loop in
its API
documentation.
So it seems that for some reason the dispatch() method is not being
called. Does that mean that there are no interesting events to read
from? Let's find out.
System call tracing
Here we resort to another helpful
tool, strace. With strace
we can try to figure out what is happening when the main loop polls
file descriptors. The strace output is huge (because it
takes easily over a hundred attempts to reproduce this), but we know
already some of the calls that involve file descriptors from the code
we looked at above, when the client is created. So we can use those
calls as a starting point when searching through the several MBs of
logs. Fast-forward to the relevant logs.
What we see there is, first, WPEBackend-fdo creating a new socket
pair (128, 130) and then, when file descriptor 130 is passed to
wl_client_create() to
create a new client, Wayland adds that file descriptor to its
epoll() instance
for monitoring clients, which is referred to by file descriptor 34. This way, whenever there are
events in file descriptor 130, we will hear about them in file descriptor 34.
So what we would expect to see next is that, after the web process
is spawned, when a Wayland client is created using the passed file
descriptor and the EGL driver requests the Wayland registry from the
display, there should be a POLLIN event coming in file
descriptor 34 and, if the dispatch() call for the source
was called,
an epoll_wait()
call on it, as that is
what wl_event_loop_dispatch()
would do when called from the source's dispatch()
method. But what do we have instead?
strace can be a bit cryptic, so let's explain
those two function calls. The first one is a poll in a series of file
descriptors (including 30 and 34) for POLLIN events. The
return value of that call tells us that there is a POLLIN
event in file descriptor 34 (the Wayland display epoll()
instance for clients). But unintuitively, the call right after is
trying to read a message from socket 30 instead, which we know
doesn't have any pending data at the moment, and consequently
returns an error value with an errno
of EAGAIN (Resource temporarily unavailable).
Why is the GLib main loop triggering a read from 30 instead of 34?
And who is 30?
We can answer the latter question first. Breaking on a running UI
process instance at the right time shows who is reading from
file descriptor 30:
#1 0x70ae1394 in wl_os_recvmsg_cloexec (sockfd=30, msg=msg@entry=0x700fea54, flags=flags@entry=64)
#2 0x70adf644 in wl_connection_read (connection=0x6f70b7e8)
#3 0x70ade70c in read_events (display=0x6f709c90)
#4 wl_display_read_events (display=0x6f709c90)
#5 0x70277d98 in pwl_source_check (source=0x6f71cb80)
#6 0x743f2140 in g_main_context_check (context=context@entry=0x2111978, max_priority=, fds=fds@entry=0x6165f718, n_fds=n_fds@entry=4)
#7 0x743f277c in g_main_context_iterate (context=0x2111978, block=block@entry=1, dispatch=dispatch@entry=1, self=)
#8 0x743f2ba8 in g_main_loop_run (loop=0x20ece40)
#9 0x00537b38 in ?? ()
So it's also Wayland, but on a different level. This
is the Wayland client source (remember that the browser is also a
Wayland client?), which is installed
by cog (a thin browser
layer on top of WPE WebKit that makes writing browsers easier)
to process, among others, input events coming from the parent Wayland
display. Looking
at the cog code, we can see that the
wl_display_read_events()
call happens only if GLib reports that there is
a G_IO_IN
(POLLIN) event in its file descriptor, but we already
know that this is not the case, as per the strace
output. So at this point we know that there are two things here that
are not right:
An FD source with a G_IO_IN condition is not being dispatched.
An FD source without a G_IO_IN condition is being dispatched.
Someone here is not telling the truth, and as a result the main loop
is dispatching the wrong sources.
The loop (part II)
It is at this point that it would be a good idea to look at what
exactly the GLib main loop is doing internally in each of its stages
and how it tracks the sources and file descriptors that are polled and
that need to be processed. Fortunately, debugging symbols for GLib are
very small, so debugging this step by step inside the device is rather
easy.
Let's look at how the main loop decides which sources
to dispatch, since for some reason it's dispatching the wrong ones.
Dispatching happens in
the g_main_dispatch()
method. This method goes over a list of pending source dispatches and
after a few checks and setting the stage, the dispatch method for the
source gets called. How is a source set as having a pending dispatch?
This happens in
g_main_context_check(),
where the main loop checks the results of the polling done in this
iteration and runs the check() method for sources that
are not ready yet so that they can decide whether they are ready to be
dispatched or not. Breaking into the Wayland display source, I know
that
the check()
method is called. How does this method decide to be dispatched or
not?
In this lambda function we're returning TRUE or
FALSE, depending on whether the revents
field in
the GPollFD
structure have been filled during the polling stage of this iteration
of the loop. A return value of TRUE indicates to the main
loop that we want our source to be dispatched. From
the strace output, we know that there is a
POLLIN (or G_IO_IN) condition, but we also know that the main loop is
not dispatching it. So let's look at what's in this GPollFD structure.
For this, let's go back to g_main_context_check() and inspect the array
of GPollFD structures that it received when called. What do we find?
That's the result of the poll() call! So far so good. Now the method
is supposed to update the polling records it keeps and it uses when
calling each of the sources check() functions. What do these records
hold?
We're not interested in the first record quite yet, but clearly
there's something odd here. The polling records are showing a
different value in the revents fields for both 30 and 34. Are these
records updated correctly? Let's look at the algorithm that is doing
this update, because it will be relevant later on.
pollrec = context->poll_records;
i = 0;
while (pollrec && i < n_fds)
  {
    while (pollrec && pollrec->fd->fd == fds[i].fd)
      {
        if (pollrec->priority <= max_priority)
          {
            pollrec->fd->revents =
              fds[i].revents & (pollrec->fd->events | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
          }
        pollrec = pollrec->next;
      }
    i++;
  }
In simple words, this algorithm traverses the polling
records and the GPollFD array simultaneously,
updating the polling records revents with the results of
polling. From
reading how
the pollrec linked list is built internally, it's
possible to see that it's purposely sorted by increasing file
descriptor identifier value. So the first item in the list will have
the record for the lowest file descriptor identifier, and so on. The
GPollFD array is also built in this way, allowing for a
nice optimization: if more than one polling record – that is, more
than one polling source – needs to poll the same file descriptor,
this can be done at once. This is why this otherwise O(n^2) nested
loop can actually be reduced to linear time.
One thing stands out here though: the linked list is only advanced
when we find a match. Does this mean that we always have a match
between polling records and the file descriptors that have just been
polled? To answer that question we need to check how is the array of
GPollFD structures
filled. This
is done in g_main_context_query(), as we hinted
before. I'll spare you the details, and just focus on what seems
relevant here: when is a poll record not used to fill
a GPollFD?
Interesting! If a polling record belongs to a source whose priority
is lower than the maximum priority that the current iteration is
going to process, the polling record is skipped. Why is this?
In simple terms, this happens because each iteration of the main
loop finds out the highest priority between the sources that are ready
in the prepare() stage, before polling, and then only
those file descriptor sources with at least such a priority are
polled. The idea behind this is to make sure that high-priority
sources are processed first, and that no file descriptor sources with
lower priority are polled in vain, as they shouldn't be
dispatched in the current iteration.
GDB tells me that the maximum priority in this iteration is
-60. From an earlier GDB output, we also know that there's a
source for file descriptor 19 with a priority of 0.
Since 19 is lower than 30 and 34, we know that this record comes
before theirs in the linked list (and as it happens, it's the
first one in the list too). But we know that, because its priority is
0, it is too low to be added to the file descriptor array to be
polled. Let's look at the loop again.
pollrec = context->poll_records;
i = 0;
while (pollrec && i < n_fds)
  {
    while (pollrec && pollrec->fd->fd == fds[i].fd)
      {
        if (pollrec->priority <= max_priority)
          {
            pollrec->fd->revents =
              fds[i].revents & (pollrec->fd->events | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
          }
        pollrec = pollrec->next;
      }
    i++;
  }
The first polling record was skipped during the update of
the GPollFD array, so the condition pollrec
&& pollrec->fd->fd == fds[i].fd is never going to
be satisfied, because 19 is not in the array. The
innermost while() is not entered, and as such
the pollrec list pointer never moves forward to the next
record. So no polling record is updated here, even if we have
updated revents information from the polling results.
What happens next should be easy to see. The check()
methods of all polled sources are called with
outdated revents. In the case of the source
for file descriptor 30, we wrongly tell it there's a
G_IO_IN condition, so it asks the main loop to
dispatch it, triggering a wl_connection_read() call on a
socket with no incoming data. For the source with file descriptor 34,
we tell it that there's no incoming data and
its dispatch() method is not invoked, even when on the
other side of the socket we have a client waiting for data to come and
blocking in the meantime. This explains what we see in
the strace output above. If the source with file
descriptor 19 continues to be ready and with its priority unchanged,
then this situation repeats in every further iteration of the main
loop, leading to a hang in the web process, which is forever waiting
for the UI process to read from its end of the socket.
The bug – explained
I have been using GLib for a very long time, and I have only fixed
a couple of minor bugs in it over the years. Very few actually,
which is why it was very difficult for me to come to accept that I
had found a bug in one of the most reliable and complex parts of the
library. Impostor syndrome is a thing and it really gets in the way.
But in a nutshell, the bug in the GLib main loop is that the very
clever linear update of polling records is missing something very
important: it should skip ahead to the first matching polling record
before attempting to update its revents. Without this, in the
presence of a file descriptor source with the lowest file descriptor
identifier and also a lower priority than the cutoff priority of the
current main loop iteration, revents in the polling records are not
updated, and therefore the wrong sources can be dispatched. The
simplest patch to avoid this would look as follows.
i = 0;
while (pollrec && i < n_fds)
  {
+   while (pollrec && pollrec->fd->fd != fds[i].fd)
+     pollrec = pollrec->next;
+
    while (pollrec && pollrec->fd->fd == fds[i].fd)
      {
        if (pollrec->priority <= max_priority)
Once we find the first matching record, let's update all consecutive
records that also match and need an update, then let's skip to the
next record, rinse and repeat. With this two-line patch, the web
process was finally unlocked, the EGL display initialized properly,
the web extension and the web page were loaded, CI tests started
passing again, and this exhausted developer could finally put his mind
to rest.
A complete
patch, including improvements to the code comments around this
fascinating part of GLib and also a minimal test case reproducing the
bug, has already been reviewed by the GLib maintainers and merged into
both stable and development branches. I expect that at
least some GLib sources will start being called in a
different (but correct) order from now on, so keep an eye on your
GLib sources. :-)
Standing on the shoulders of giants
At this point I should acknowledge that without the support from my
colleagues in the WebKit team in Igalia, getting to the bottom of this
problem would have probably been much harder and perhaps my sanity
would have been at stake. I want to
thank Adrián
and Žan for
their input on Wayland, debugging techniques, and for allowing me to
bounce back and forth ideas and findings as I went deeper into this
rabbit hole, helping me to step out of dead-ends, reminding me to use
tools out of my everyday box, and ultimately, to be brave enough to
doubt GLib's correctness, something that much more often than not I
take for granted.
Thanks also to Philip
and Sebastian for their
feedback and prompt code review!
There are many ways to change a GNOME icon. MenuLibre is a good option if you want a quick process, but in this case we want to understand where the icons live at the root level, and then make the change with that understanding.
Installing MenuLibre
sudo apt-get install menulibre
As you can see, the Anthy Dictionary desktop icon has a bad resolution, so we will change it.
MenuLibre shows that the command for this app is "kasumi", which is an important clue. Searching for "kasumi" in Nautilus, we find the icon in two different places, along with kasumi.desktop, kasumi.list, and other kasumi files.
Now we have to redesign the new icon for Anthy Dictionary Editor in Inkscape.
Don't forget to run Inkscape with sudo so you can save the icon to both places.
I was working on the design for the GNOME Stickers 2019; you can find the design and the SVG here, and you are free to download and edit it with Inkscape.
Don't forget to download the Trebuchet font from the same link.
In recent years, I have been working on the Fluent Bit project, making it a reliable system-level tool that solves most of the logging challenges we face nowadays, with a strong focus on cloud environments. This has been a joint effort with the community of individuals and companies who are deploying it in production.
The whole point of logging is to perform data analysis, so whatever makes it more reliable, easier, and more flexible is a good addition to have; as a project maintainer I am always looking for innovation, and the Stream Processing topic has got a lot of attention in my circle of colleagues and the community in general.
Stream Processing (aka SP) can be described as the ability to perform data processing while the data is still in motion. Most people familiar with the term know about Apache Spark, Apache Flink, and Kafka Streams, among others. Most of this tooling provides a full set of data processing capabilities and helps perform flexible data analysis once the data is fully aggregated.
I mentioned above that Stream Processing happens once the data is aggregated: different services send data from multiple local and remote sources to a central place, where data processing and analysis can be performed. But what if we could do distributed stream processing on the edge side? This would be very beneficial, since we could catch exceptions or trigger alerts based on specific data processing results as soon as they happen.
To implement Stream Processing on the Edge we need the proper tooling that at least must have the following features:
Ability to collect, parse, filter and deliver data to remote hosts.
Lightweight: low memory and CPU footprint.
Provide a Query language to perform computation on top of streams of data.
Be Open Source (of course, right?)
Fluent Bit is a good fit since its nature is data collection, processing, and delivery, so it's a natural candidate to extend with Stream Processing capabilities. That's something we at Arm and Treasure Data have been working on over the last weeks (although the idea was born in 2018).
Our current implementation will be showcased in the upcoming Fluent Bit v1.1.0 release in April 2019. It brings a Stream Processor engine with SQL support to query records and run aggregation functions with windowing and optional grouping. In addition, it allows the creation of new streams of data from query results, which can be tagged and routed as normal records of the Fluent Bit pipeline, e.g.:
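As a taste of what such a query could look like (an illustrative sketch only — the stream and field names are hypothetical, and the exact grammar may differ in the final v1.1.0 release):

```sql
-- Average response size per host over 5-second tumbling windows,
-- re-injected into the pipeline as a new, routable stream.
CREATE STREAM avg_sizes
    WITH (tag='results.avg_sizes')
    AS
    SELECT host, AVG(size)
    FROM STREAM:apache
    WINDOW TUMBLING (5 SECOND)
    GROUP BY host;
```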
In the next part, I will share details on how to get started with this new Stream Processing feature. As usual, we are looking forward to your feedback...
I've just noticed that the last post on my blog was almost a year ago! I will try to fix that and post more often.
In 2017 Fluent Bit got a lot of traction: people from the Cloud Native space started asking for more specific features, and these were implemented. Now Fluent Bit is deployed a few thousand times every single day, having a real impact where it's solving logging pains at scale.
As a maintainer and core developer, I am very happy to see this traction from users, but there has also been community growth; honestly, without the community the project would not be rocking as it is today. End users are an important piece of the project, helping with contributions, troubleshooting, and feedback to align the roadmap in the right direction, so thank you all for your help and patience!
In 2017, as of today, we have done 27 releases: 3 of them major releases and the rest bug-fix releases focused on stability. We started the year with 0.10 as the major version and are finishing with 0.12 as the next stable one, with 0.13 just showing up in a development stage.
From a technical perspective Fluent Bit acquired the following features on this year:
File Output: write records to the file system (msgpack or JSON)
Extended Unit testing: internal routines and runtime tests
The list could be more extensive, as there are many other improvements in each subsystem. All of this has been done thanks to the contributions of more than 30 people in areas such as bug reporting, troubleshooting, code fixes and documentation, among others.
Fluent Bit 0.13
This is the current development version; in addition to what is in 0.12, the following features are already available:
New HTTP REST Interface:
Service information
JSON Metrics
Prometheus Metrics
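As a sketch of how this interface is typically enabled (the key names and port below are assumptions based on Fluent Bit's configuration style; check the official docs for your version), the built-in HTTP server is switched on in the [SERVICE] section:

```ini
[SERVICE]
    # Turn on the embedded HTTP server for the REST interface
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
```

With that in place, the service information and metrics listed above would be one HTTP GET away, with both JSON and Prometheus-formatted output available from the server.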
More details about new stuff will be published at CloudNativeCon US!
Fluent Bit is a multi-platform Log Forwarder written in C
This year Fluent Bit got several new features, such as event routing, buffering, an improved shared-library mode and many new plugins to collect data and deliver it to new destinations: in_tcp, in_forward, in_health, in_proc, out_http, out_influxdb, out_flowcounter, etc.
One of the recent features that got a lot of attention at the last CloudNativeCon was the ability to extend its output destinations through Golang plugins: Fluent Bit can dynamically load shared libraries created with Go, and it's really neat.
Something I have not written much about is that the new version fully runs in a Windows environment: the same code base works on Windows, it's portable (no Cygwin/MinGW), and it can be compiled with Visual Studio without effort. This is still experimental but functional; I expect that for the 0.11 release we will have Windows binaries and documentation available.
It has been exciting to see how the project has evolved; it is now walking towards becoming a Cloud Native log forwarder. There are a few missing features that are a priority for the beginning of this 2017, such as filtering and monitoring capabilities; they will come very soon.
Igalia is
hiring. We're currently interested in Multimedia
and Chromium
developers. Check the announcements for details on the
positions and our company.
It has been a while since the last post, and so many good things have happened. I will not dig into a fully detailed post, but here are some hints:
Fluent Bit v0.9 has been released. We are now working towards 0.10, which comes with Golang support for output plugins, among other neat things. Soon it will become the default log forwarder :).
Some weeks ago I attended the Embedded Linux Conference in San Diego, where I participated in the Showcase demonstrating Fluent Bit and a no-longer-secret project that runs on top of it; more news in the coming weeks.
The guys from NextThing Co were around giving away C.H.I.P.s for free. For those not aware of what it is, the C.H.I.P. is a 9 USD ARM embedded computer, and it's pretty neat!
As soon as you power up the device through the micro-USB port, you can access the serial console and start playing with it; it comes with Debian 8, WiFi (AP mode supported), 512 MB of RAM, a 1 GHz processor, 4 GB of storage... among other things (and for 9 dollars!). You should definitely consider getting one!
About a year ago, Igalia was approached by the people working on printing-related technologies at HP to see whether we could give them a hand in their ongoing effort to improve the printing experience on the web. They had been working for a while on extensions for popular web browsers that would allow users, for example, to distill a web page from cruft and ads and format its relevant contents in a way that would be pleasant to read in print. While these extensions worked fine, they were interested in exploring the possibility of adding this feature to popular browsers directly, so that users wouldn't need to be bothered with installing extensions to have an improved printing experience.
That's how Alex, Martin, and I spent a few months exploring the Chromium project and its printing architecture. Soon enough we found out that the Chromium developers had already been working on a feature that would allow pages to be stripped of cruft and presented in a sort of reader mode, at least in mobile versions of the browser. This is achieved through a module called dom distiller, which basically has the ability to traverse the DOM tree of a web page and return a clean DOM tree with only the important contents of the page. This module is based on the algorithms and heuristics of a project called boilerpipe, with some of it also coming from the now popular Readability. Our goal, then, was to integrate the DOM distiller with the modules in Chromium that take care of generating the document that is sent to both the print preview and the printing service, as well as making this feature available in the printing UI.
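The heuristic idea behind boilerpipe-style distillation can be illustrated with a toy sketch (a simplification for illustration only, not the actual dom distiller code): extract the text blocks of a page, score each one by its length and its link density, and keep only those that look like body content.

```python
from html.parser import HTMLParser

class BlockExtractor(HTMLParser):
    """Collect one text block per <p>, tracking how much text sits inside links."""
    def __init__(self):
        super().__init__()
        self.blocks = []          # list of (text, chars_inside_links)
        self._text = []
        self._linked = 0
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self._text, self._linked = [], 0
        elif tag == 'a':
            self._in_link = True

    def handle_endtag(self, tag):
        if tag == 'a':
            self._in_link = False
        elif tag == 'p':
            self.blocks.append((' '.join(self._text).strip(), self._linked))

    def handle_data(self, data):
        self._text.append(data.strip())
        if self._in_link:
            self._linked += len(data.strip())

def distill(html, min_len=40, max_link_density=0.3):
    """Keep blocks that are long enough and not dominated by link text."""
    parser = BlockExtractor()
    parser.feed(html)
    return [text for text, linked in parser.blocks
            if len(text) >= min_len
            and linked / max(len(text), 1) <= max_link_density]
```

Real distillers use many more signals (neighbouring block lengths, tag classes, and so on), but this captures the core intuition: navigation and ad blocks are short and link-heavy, while article text is long and link-poor.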
After a couple of months of work, and thanks to the kind code reviews of the folks at Google, we got the feature landed in Chromium's repository. For a while, though, it remained hidden behind a runtime flag, as the Chromium team needed to make sure that things would work well enough on all fronts before making it available to all users. Fast-forward to last week, when I found out by chance that the runtime flag has been flipped and the Simplify page printing option has been available in Chromium and Chrome for a while now, and has even reached the stable releases. The reader mode feature in Chromium seems to remain hidden behind a runtime flag, which is interesting considering that this was the original motivation behind the dom distiller.
As a side note, it is worth mentioning that the
collaboration with HP was pretty neat and it's a good
example of the ways in which Igalia can help organizations
to improve the web experience of users. From the standards
that define the web to the browsers that people use in
their everyday life, there are plenty of areas in which
work needs to be done to make the web a more pleasant
place, for web developers and users alike. If your
organization relies on the web to reach its users, or to
enable them to make use of your technologies, chances are
that there are areas in which their experience can be
improved and that's one of the things we love doing.
Johnson Street Bridge at night. Victoria, British Columbia, Canada.
I had wanted to take this photograph for a long time. Tonight there was a light rain, I was in the mood for long-exposure photos, and I had a tripod with me.
Although iconic, this blue bridge will be replaced by a new one in 2017. Half of the new bridge will be dedicated to cyclists and pedestrians :-)
It's summer! That means that, if you are a student,
you could be one of our summer interns in Igalia this
season. We have two positions available: the first
related to WebKit work and the second to web
development. Both positions can be filled in either of
our locations in Galicia or you can work remotely from
wherever you prefer (plenty of us work remotely, so
you'll have to communicate with some of us via jabber
and email anyway).
Have a look at the announcement on our web page for more details, and don't hesitate to contact me if you have any doubts about the internships!
So this is a quick update on Giselle's post on SoC (see here). There you could see a screencast of highlighting working, but with a big lag. It turns out that the lag was related to a memory leak in the cache engine that was only triggered when doing the "animation" in the drag-and-drop of the highlighting annotation. Giselle luckily has already fixed this, and I managed to spend some time adding a new API to poppler to render the annotation directly, without needing to reload the page as we were forced to before. Thus we get an even smoother animation, as you can see here.
And of course, to follow up, here is the tutorial on "Image Processing in Linux using OpenCV" that I presented at FUDCon Santiago 2010 and at Expolibre in Talca:
I added to the talk up-to-date links to the OpenCV website and its C++ API documentation, plus an introduction with a simple description of the code examples.
In the last year or so, jhbuild has received a lot of love and now it is much easier to use (for the first time) than it used to be, but by how much? Since Google Summer of Code is coming, I wondered whether it was easy for a person with no experience building GNOME to build Evince. It turns out it is not that trivial, although it is quite a bit better than the first time I managed to compile Evince using jhbuild. Since the jhbuild manual still seems to be a little outdated, I am publishing here the steps I needed on a fresh install of Ubuntu 12.04.
Install all the things jhbuild won't install for you.
Search for the line starting with "prefix" and change the directory to a writable one (the default /opt/gnome is not writable by default on Ubuntu, it seems).
For evince, you need to add the following lines to ~/.jhbuildrc
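A minimal sketch of such a ~/.jhbuildrc (jhbuildrc files are plain Python; the directories and moduleset here are assumptions, so adjust them to your setup):

```python
# ~/.jhbuildrc -- sketch; paths and moduleset are assumptions, adjust to taste
import os

# Install into a directory that is writable by your user
prefix = os.path.expanduser('~/jhbuild/install')
checkoutroot = os.path.expanduser('~/jhbuild/checkout')

# Module set matching the 3.8 stable series mentioned below
moduleset = 'gnome-suites-core-3.8'
modules = ['evince']
```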
Install dependencies using the new "sysdeps" command of jhbuild
$ jhbuild sysdeps --install
Building evince
$ jhbuild build evince --ignore-suggests --nodeps
NOTE: For the time being, this will give you an error, because jhbuild won't compile libsecret when compiling with --ignore-suggests. See this bug for more info. If it fails for you with a LIBSECRET not found error, then start a shell and do
$ jhbuild buildone libsecret
and then resume the compilation of evince. If you want, at this stage you could do
$ jhbuild buildone evince
to trigger just the compilation of evince and NONE of its dependencies. We use --ignore-suggests to avoid compiling a lot of modules that are not really important for developing with evince. If everything goes right, you should have evince 3.8 compiled. If you want master, you need to choose the "gnome-suites-core-3.10" moduleset.
One of the oldest bugs in evince concerns the ability to zoom in at large scales, like 1600% or more. This happens because the cache that renders the pages can only render full pages, and rendering full pages at large scales eats a lot of memory. So some days ago I started to work on making the cache in evince "tile" aware, that is, able to render only portions of a full page. I am happy to say that a preliminary result can be found in my evince repo on github (github.com/jaliste/evince) in the wip/tiling_manager branch. Right now a lot of things still don't work, but tile rendering in single and continuous modes is working (not in dual mode yet)...
In the video above, you can see evince rendering a page using 256 tiles. The red tiles only appear while there is no rendered tile yet, and are just there to test the code.
Note that rendering 256 tiles is quite slow, because Poppler will reparse the pdf file each time we render a tile... So why bother? In fact, the video is only an illustration; ideally evince will use tiles of about the size of the screen, so you only get to see a few tiles at a time. The main advantage of this code is that now we can easily implement larger zoom modes. The next picture shows Evince rendering the PDF 1.7 reference document at 1600% using 256 tiles for each page. Memory used by evince is around 150 MB with this.
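The tiling idea can be sketched in a few lines of pseudo-geometry (an illustration of the concept, not the actual evince cache code): split the page area into a grid and track each sub-rectangle independently, so only the tiles currently visible need to be rendered and kept in memory.

```python
def tile_rects(page_w, page_h, cols, rows):
    """Split a page of page_w x page_h pixels into cols x rows tiles.

    Returns a list of (x, y, w, h) rectangles covering the page exactly,
    with edge tiles absorbing the remainder when the size does not divide
    evenly.
    """
    rects = []
    for r in range(rows):
        for c in range(cols):
            x = c * page_w // cols
            y = r * page_h // rows
            w = (c + 1) * page_w // cols - x
            h = (r + 1) * page_h // rows - y
            rects.append((x, y, w, h))
    return rects

# At 1600% zoom the full page bitmap is huge, but a 16x16 grid keeps
# each individual tile at a manageable size (256 tiles, as in the video).
tiles = tile_rects(13600, 17600, 16, 16)
```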
Unfortunately, there are a lot of things that still don't work... selections, dual-page mode, and others. Also, since this is a major refactor/rework of the cache code, it will need a lot of testing and review before it gets merged into master (as new code it is likely to have more bugs, and we don't want evince to be unstable)... but we are right at the beginning of the 3.7 cycle, so I hope I can get this merged into evince 3.8.
So in the last two weeks I have been in Canada for the first time. I first spent some days at the Banff Centre, which is awesome, and then spent last weekend in Montreal for the GNOME Montreal Summit.
This was my first international GNOME (un)conference, and it was great to finally meet people like Colin, Owen, Ryan, Cosimo, Karen, etc.
I discussed a lot and asked many questions of Ryan, Cosimo and Colin about some ideas for the future of Evince-related technologies. For instance, Cosimo and I agreed that it probably makes sense to have an evince-based plugin for your browser (finally!!), because it could be better for the workflow where you don't want to download the document (and you don't need to). I also asked Ryan and Colin about ways of making evince safer by splitting the rendering code into a sandboxed process. And I discussed many other things with Cosimo about evince, gtk and css.
I also participated in some sessions, like the jhbuild session and the GSoC session, and I even organized a short session about online metadata for our desktop. Although not many people were excited about my ideas, it was nevertheless great to have the feedback of many talented hackers in GNOME.
I also discussed with Andreas the idea of getting some cool laptop skins with GNOME designs, so hopefully he will come up with some cool designs soon, and with Marina I discussed women's outreach and how the gnome-chile community is working to promote GNOME and all these programs.
Overall, it was really great for me to meet all these people, and I thank everyone I met for their feedback; I now surely have a lot of ideas about how to improve evince... if only I had more time. Thanks to the GNOME Foundation for sponsoring my trip and to an anonymous friend for letting me crash on his couch in Montreal.
Time to get on the plane... some 10 hours ahead to be back in a sunny Santiago again.
This is just a quick note to say that I am going to the Montreal GNOME Summit. While for many the move from Boston to Montreal was a bad thing, I just happened to be in Canada on the same dates, so for me it was a lucky move! I am looking forward to meeting some GNOME devs live for the first time.
This week’s Bug Day target is *drum roll please* GNOME Power Manager!
* 50 New bugs need a hug
* 50 Incomplete bugs need a status check
* 50 Confirmed bugs need a review
Are you looking for a way to start giving some love back to your
adorable Ubuntu project?
Did you ever wonder what triage is? Want to learn about it?
This is a perfect time! Everybody can help in a Bug Day!
Open your IRC client and go to #ubuntu-bugs (FreeNode); the BugSquad will
be happy to help you start contributing!
Wanna be famous? It's easy! Remember to use 5-A-Day, so if you do good
work your name could be listed among the top 5-A-Day contributors on the
Ubuntu Hall of Fame page!
Last week we had an extraordinary Bug Day for Operation Cleansweep, and as you probably know, yesterday we organized a Bug Day for Banshee, the multimedia player. And guess what happened...? It was *amazing*! If you look at the Bug Day page you'll notice that there are no white spots... only green rows! I can't recall the last bug day where we had all the bugs marked off the lists, and since a picture (in this case, a graph) says more than a thousand words, let me show you the graph of that bug day:
Sweet, isn't it?
Thanks a lot to our rocking contributors! And stay tuned for next week's Bug Day, especially if you're a translator: I've heard that David Planella is planning one for the Ubuntu Translations project!
Finally, thanks to Reynaldo's intervention :-), I took the time to polish the talk and the examples from the introduction to Gtkmm that I presented at FUDCon Santiago 2010, so here it is:
The event is going to have a lot of interesting talks, like "How to create GDM Artwork" by the awesome Daniel Galleguillos and a couple of Tracker talks by the amazing Ivan Frade.
Día GNOME 2008, Photo by Germán Poó Caamaño
I'm sure you don't want to miss it. It's a free event; you only need to register here. What are you waiting for? Join us! See you in Valparaíso!