| Commit message | Author | Age | Lines |
New root command `fbmark`, which takes a buddy, a channel, or a chat index
as its argument. Any messages from that buddy or in that chat are marked
as read when `fbmark` is invoked.
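A hypothetical session in the bitlbee control channel might look like this (the buddy name, channel name, and index below are made up for illustration):

```
fbmark jsmith      (mark messages from the buddy jsmith as read)
fbmark #cool-chat  (mark messages in a chat, by channel name)
fbmark 2           (the same, by chat index)
```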
---
The `mark_read` setting now accepts `available` in addition to `true` and
`false`. When `mark_read` is set to `available`, messages are only marked
as read while the user is not marked as away/invisible.
The `mark_read_reply` setting was also added; when set to `true`, any
unread messages are marked as read when the user replies to them
(assuming they were received by bitlbee).
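Assuming bitlbee's usual `account set` syntax (the account tag `facebook` is an assumption), the new values might be applied like so:

```
account facebook set mark_read available
account facebook set mark_read_reply true
```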
---
This is mainly for sanity and consistency, as the bee_user should be
zeroed.
---
There is a report of the 'oh' parameter, which is used as the checksum,
missing from some of the icon images. The solution is to simply use the
URL itself as the checksum in the *rare* case that the 'oh' parameter is
missing.
---
It happens that < and > are special characters for gtk-doc.
---
This fixes an improper GError propagation, which is not only incorrect,
but also has no effect. It ends up leading to the unsupported XMA
message being ignored without any notice to the user.
---
Sometimes Facebook will send a batch of duplicated messages over the
MQTT stream. On occasion, Facebook also sends duplicated messages which
are not sequential; however, this does not occur at the rate of the
sequential duplication. This is likely because the plugin is using an
older revision of the Messenger protocol.
For now, we should attempt to keep sequential duplicates from being
displayed. This fix is not bulletproof, but it is simple, and should cut
down on the duplicated message spam.
The proper fix is likely to update the plugin to use a more recent
Messenger protocol revision.
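A minimal sketch of the deduplication idea, under the assumption that each MQTT message carries a numeric identifier (names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Remember the identifier of the last message shown; a message that
 * repeats it back-to-back is a sequential duplicate and is dropped.
 * Non-sequential duplicates still get through, as noted above. */
static uint64_t last_msgid;

static bool should_display(uint64_t msgid)
{
    if (msgid != 0 && msgid == last_msgid)
        return false;
    last_msgid = msgid;
    return true;
}
```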
---
Oftentimes, the MQTT stream is disconnected by Facebook for whatever
reason. The only explanation that comes to mind is some sort of load
balancing on Facebook's end. It has also been reported that when a user
logs out on the Facebook website, their MQTT connections are killed.
Whenever the connection is killed by Facebook, the user is able to
reconnect right after.
In order to make for a quieter experience, the plugin should attempt
to silently reconnect before notifying the user of an error. This is
done by relying on the sequence identifier and the message queue to
ensure everything remains synchronized for when the connection returns.
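The reconnect policy can be sketched as a bounded silent-retry counter (entirely hypothetical names; the real plugin additionally replays the message queue from the last sequence identifier):

```c
#include <stdbool.h>

#define MAX_SILENT_RETRIES 3

static int retries;

/* Called when the MQTT stream drops: returns true while the plugin
 * should reconnect silently, false once the user must be notified. */
static bool on_disconnect(void)
{
    if (retries < MAX_SILENT_RETRIES) {
        retries++;
        return true;
    }
    retries = 0;
    return false;
}

/* Called once a reconnect succeeds and the stream is synchronized. */
static void on_connected(void)
{
    retries = 0;
}
```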
---
This ensures a message is sent successfully before attempting to send
another message. As a result, messages are sent in their proper order,
instead of the order in which they arrive. This also introduces a check
for the successful sending of a message, rather than silently failing.
The queued sending also ensures messages are not lost when the state of
visibility is being switched. This also allows the plugin to silently
reconnect when a connection failure occurs.
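A toy model of the queued sending, with made-up names: one message is in flight at a time, and the next is only dispatched after the previous one is acknowledged.

```c
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_MAX 16

static const char *queue[QUEUE_MAX];
static unsigned q_head, q_tail;   /* ever-growing indices into queue */
static bool in_flight;            /* a send is awaiting its ack */

static void enqueue(const char *msg)
{
    queue[q_tail++ % QUEUE_MAX] = msg;
}

/* Dispatch the next message, or NULL if one is in flight or the
 * queue is empty. */
static const char *next_to_send(void)
{
    if (in_flight || q_head == q_tail)
        return NULL;
    in_flight = true;
    return queue[q_head % QUEUE_MAX];
}

/* Ack handler: on success the message leaves the queue; on failure it
 * stays at the head and is resent after a silent reconnect. */
static void on_ack(bool ok)
{
    in_flight = false;
    if (ok)
        q_head++;
}
```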
---
The plugin is required to read Thrift data for the presence states of
contacts. The data being read has some optional fields, which are only
rarely absent; this has let the bug go undiscovered for quite some time.
Not only was the plugin failing to properly account for optional fields,
it also did not account for field scoping. This is not really an issue
until a Thrift list is being read, at which point the field identifier
grows with each field read rather than resetting. The field identifier
is only relevant to its local scope, nothing more. More importantly, the
identifier must be reset with each iteration of a list.
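A toy model of the scoping rule, assuming delta-encoded field identifiers in the style of Thrift's compact protocol (all names here are made up):

```c
#include <stddef.h>
#include <stdint.h>

/* Decode field identifiers for n_elems list elements, each carrying
 * per_elem delta-encoded fields. The identifier is local to one
 * element, so it must reset to zero on every iteration instead of
 * accumulating across the whole list. */
static void decode_ids(const uint8_t *deltas, size_t per_elem,
                       size_t n_elems, int16_t *out)
{
    size_t i, j, k = 0;

    for (i = 0; i < n_elems; i++) {
        int16_t id = 0;  /* the fix: reset for each list element */

        for (j = 0; j < per_elem; j++) {
            id += deltas[i * per_elem + j];
            out[k++] = id;
        }
    }
}
```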
---
This adds more functionality to the `group_chat_open` setting, which
previously only allowed added group chats to be opened automatically.
With the new value `all`, any group chat is automatically opened when a
message comes in.
Value overview:
- false: never open group chats automatically
- true: open added group chats automatically
- all: always open group chats automatically
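Assuming bitlbee's usual `account set` syntax (the account tag `facebook` is an assumption), the new value is applied like so:

```
account facebook set group_chat_open all
```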
---
This is a regression introduced by 00c0ae8.
---
When fetching more than 500 contacts, multiple requests must be made in
order to avoid request limits. When friends are retrieved and processed,
the plugin must keep a count of the processed contacts to know whether
or not it needs to retrieve more.
Currently, this counter assumes every contact is a friend, which is not
always the case; the counter needs to advance even when a contact is
not a friend.
This fixes the incomplete fetching of the buddy list when a user has
over 500 friends.
This is a regression introduced by 00c0ae8.
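The counting fix can be sketched like this (hypothetical names; the real code walks the contacts returned by one paged request):

```c
#include <stdbool.h>
#include <stddef.h>

/* Process one page of contacts: every contact advances the count used
 * for pagination, but only friends are added to the buddy list. */
static size_t process_page(const bool *is_friend, size_t n_contacts,
                           size_t *n_added)
{
    size_t i, processed = 0;

    for (i = 0; i < n_contacts; i++) {
        processed++;            /* advance for *every* contact */
        if (is_friend[i])
            (*n_added)++;       /* but only friends join the list */
    }
    return processed;           /* compared against the page limit */
}
```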
---
This is mainly for the Red Hat folks maintaining RHEL 6.
---
This regression was introduced by 00c0ae8.
---
Unlike json_parser_load_from_data(), g_strndup() does not handle signed
sizes that are negative. This causes the size to overflow to a very
large value, which in turn leads to a segmentation fault.
The solution is simple: calculate the size of the data when the given
size is negative.
This bug was introduced by 0121bae.
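The guard can be sketched in plain C (g_strndup() itself takes an unsigned gsize; the helper name here is made up):

```c
#include <string.h>

/* A negative signed size would wrap to a huge unsigned value when
 * passed to g_strndup(), so compute the real length instead. */
static size_t safe_size(const char *data, long size)
{
    return (size < 0) ? strlen(data) : (size_t) size;
}
```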
---
Older glib versions didn't consider "$" to be a valid expression, and
threw this error:
Root node followed by invalid character '
(That's supposed to be '%c' with a \0)
Since this is possibly the simplest expression to parse, a g_strcmp0()
can do the job.
Thanks to advcomp2019 for reporting this bug and finding a test case
where this issue is reproducible every time (receiving events of people
joining or leaving in a groupchat)
Also thanks to EionRobb who realized what the bug was three hours ago
(and I didn't listen because I thought the previous bug was the same)
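The special case amounts to a plain string comparison; a sketch in standard C (g_strcmp0() additionally tolerates NULL, which the extra check models):

```c
#include <string.h>

/* Match the root JSONPath expression "$" without involving the JSONPath
 * parser, which older glib versions reject for this input. */
static int is_root_expr(const char *expr)
{
    return expr != NULL && strcmp(expr, "$") == 0;
}
```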
---
Older json-glib versions had a bug[1] in which the length parameter was
ignored and this error happened if the input was not null-terminated:
JSON data must be UTF-8 encoded
Since these versions are expected to still be around in some distros,
this commit makes a copy with g_strndup() to ensure that it's always
null-terminated.
Thanks to advcomp2019 for reporting this bug and finding a test case
where this issue is reproducible every time (receiving events of people
joining or leaving in a groupchat)
[1]: https://bugzilla.gnome.org/show_bug.cgi?id=727755
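The workaround boils down to copying with an explicit length so the parser always sees a terminated string; a plain-C sketch of what g_strndup() guarantees here:

```c
#include <stdlib.h>
#include <string.h>

/* Copy len bytes of data and always append a terminating NUL, so a
 * parser that ignores its length parameter still stops in the right
 * place. This mirrors what g_strndup() does. */
static char *dup_terminated(const char *data, size_t len)
{
    char *copy = malloc(len + 1);

    if (copy != NULL) {
        memcpy(copy, data, len);
        copy[len] = '\0';
    }
    return copy;
}
```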
---
Changes made:
- Build and install bitlbee from /tmp
- Disable the building of the bitlbee documentation
- Move all build commands to travis.yml (more informative)
- Move all Travis-related scripts to a hidden directory
- Move the bitlbee build commands to a script
- Only deploy the master branch (excluding pull requests)
- Remove redundant parameters from the bitlbee configure command
---
Any call to fb_mqtt_write() can result in an error writing to the
socket, which means fb_mqtt_close() can be called and the mqtt object
invalidated.
Trying to write priv->tev = 0 at that point is a small invalid write,
but not enough to make it crash. The real problem is fb_mqtt_timeout(),
which adds a 90-second delay after which it *does* crash, often when a
different account has already finished logging in.
The fix here takes advantage of the cleanup done by fb_mqtt_close(): by
adding the timeout before that call, it will find a nonzero priv->tev
and remove it.
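The ordering can be modeled with a couple of toy functions (hypothetical; the real code uses glib event sources tracked in priv->tev):

```c
static unsigned tev;   /* pending timeout source id, 0 when none */

/* Close tears down any pending timer as part of its cleanup. */
static void mqtt_close(void)
{
    if (tev != 0)
        tev = 0;
}

/* The fix: register the timeout *before* closing, so the cleanup in
 * mqtt_close() finds a nonzero tev and removes it, instead of the
 * timer later firing against a freed object. */
static void mqtt_timeout(void)
{
    tev = 1;           /* hypothetical source id */
    mqtt_close();
}
```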
---
This was the only line that assigned anything to priv->wev, and it was
behind an (incorrect) condition that checked whether it was nonzero.
That would have replaced priv->wev if the condition were ever true, but
since it wasn't, the only result was potentially delayed writes
(for example, filling the write buffer and only writing the rest a
minute later when pinging the server).
The new condition also checks the return value of fb_mqtt_cb_write(),
which is true if it should continue writing, or false otherwise.
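The corrected condition can be sketched as (hypothetical stand-ins for priv->wev and fb_mqtt_cb_write()):

```c
#include <stdbool.h>

static unsigned wev;        /* write-watcher source id, 0 when none */
static bool have_pending;   /* data left in the write buffer */

/* Stand-in for fb_mqtt_cb_write(): true means keep writing. */
static bool cb_write(void)
{
    return have_pending;
}

/* Only install a write watcher when none exists and there is still
 * data to flush; the old code checked wev != 0, which never held. */
static void maybe_add_watcher(void)
{
    if (wev == 0 && cb_write())
        wev = 1;            /* hypothetical source id */
}
```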
---
See https://github.com/travis-ci/travis-ci/issues/1975
Since the addon doesn't handle the job matrix very well, I copied one of
the suggestions from that thread, which does pretty much the same as the
addon but in a place we can control.