I'm running ejabberd 19.09.1 using the official Docker image, configured for anonymous authentication with mod_muc. Clients generally connect from the browser through a WebSocket endpoint, and ejabberd sits behind an nginx reverse proxy.

When a client disconnects uncleanly (e.g. by killing the browser tab), I immediately see a message in the log file like:

ejabberd_1  | 15:39:04.917 [info] (websocket|<0.591.0>) Closing c2s session for 491099311875587266962@localhost/7150882238488846509978: Connection failed: connection closed

However, in the MUC room, the disconnected user appears to remain online indefinitely (i.e. there is no timeout). No unavailable presence is sent to the room until someone else joins, leaves, or sends a group message, at which point the unavailable presence suddenly appears and the user is shown as offline. Private messages exchanged between other users (not the disconnected one) have no effect.
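
For reference, these are the keepalive-related options from the full config below, excerpted here with my understanding of them in the comments (which may be off). As the log line above shows, the dead connection itself is detected immediately, so these don't appear to be the limiting factor:

## WebSocket-level keepalive (top-level options): ping every 10 seconds,
## close the connection after 60 seconds without traffic.
websocket_ping_interval: 10
websocket_timeout: 60

modules:
  mod_ping:
    ## XMPP-level pings from the server every 15 seconds; kill the c2s
    ## session if no answer arrives within 30 seconds.
    send_pings: true
    ping_interval: 15
    ping_ack_timeout: 30
    timeout_action: kill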

I was previously running ejabberd 16.09 on Debian Jessie, where this behavior did not occur: presence updates were instant, even on unclean disconnects.

Here are the contents of my ejabberd.yml file:

---
## loglevel: Verbosity of log files generated by ejabberd
## 0: No ejabberd log at all (not recommended)
## 1: Critical
## 2: Error
## 3: Warning
## 4: Info
## 5: Debug
loglevel: 4

## rotation: Disable ejabberd's internal log rotation, as the Debian package
## uses logrotate(8).
log_rotate_count: 0
log_rotate_date: ""

## hosts: Domains served by ejabberd.
## You can define one or several, for example:
## hosts:
##   - "example.net"
##   - "example.com"
##   - "example.org"
hosts:
  - "localhost"

certfiles:
  - "/etc/ejabberd/ejabberd.pem"
  ## - "/etc/letsencrypt/live/*/*.pem"

## TLS configuration
define_macro:
  'TLS_CIPHERS': "HIGH:!aNULL:!eNULL:!3DES:@STRENGTH"
  'TLS_OPTIONS':
    - "no_sslv3"
    - "no_tlsv1"
    - "no_tlsv1_1"
    - "cipher_server_preference"
    - "no_compression"
    ## 'DH_FILE': "/path/to/dhparams.pem"
    ## generated with: openssl dhparam -out dhparams.pem 2048

c2s_ciphers: 'TLS_CIPHERS'
s2s_ciphers: 'TLS_CIPHERS'
c2s_protocol_options: 'TLS_OPTIONS'
s2s_protocol_options: 'TLS_OPTIONS'
## c2s_dhfile: 'DH_FILE'
## s2s_dhfile: 'DH_FILE'

listen:
  -
    port: 5280
    ip: "0.0.0.0"
    module: ejabberd_http
    request_handlers:
      ##"/api": mod_http_api
      "/http-bind": mod_bosh
      ## "/upload": mod_http_upload
      "/websocket": ejabberd_http_ws
    captcha: false
    register: false
    tls: false
    protocol_options: 'TLS_OPTIONS'
    web_admin: false

websocket_ping_interval: 10
websocket_timeout: 60

## Disabling digest-md5 SASL authentication. digest-md5 requires plain-text
## password storage (see auth_password_format option).
disable_sasl_mechanisms:
  - "digest-md5"
  - "X-OAUTH2"

s2s_use_starttls: required

## Store the plain passwords or hashed for SCRAM:
auth_password_format: scram

##
## Anonymous login support:
auth_method: anonymous
anonymous_protocol: both
allow_multiple_connections: true

## Full path to a script that generates the image.
## captcha_cmd: "/usr/share/ejabberd/captcha.sh"

acl:
  admin:
     user:
       - ""

  local:
    user_regexp: ""
  loopback:
    ip:
      - "127.0.0.0/8"
      - "::1/128"
      - "::FFFF:127.0.0.1/128"

access_rules:
  local:
    - allow: local
  c2s:
    - deny: blocked
    - allow
  announce:
    - allow: admin
  configure:
    - allow: admin
  muc_create:
    - allow: local
  pubsub_createnode:
    - allow: local
  register:
    - allow
  trusted_network:
    - allow: loopback

api_permissions:
  "console commands":
    from:
      - ejabberd_ctl
    who: all
    what: "*"
  "admin access":
    who:
      - access:
          - allow:
            - acl: loopback
            - acl: admin
      - oauth:
        - scope: "ejabberd:admin"
        - access:
          - allow:
            - acl: loopback
            - acl: admin
    what:
      - "*"
      - "!stop"
      - "!start"
  "public commands":
    who:
      - ip: "127.0.0.1/8"
    what:
      - "status"
      - "connected_users_number"

shaper:
  normal: 1000
  fast: 50000

shaper_rules:
  max_user_sessions: 10
  max_user_offline_messages:
    - 5000: admin
    - 100
  c2s_shaper:
    - none: admin
    - normal
  s2s_shaper: fast

modules:
  ## mod_adhoc: {}
  mod_admin_extra: {}
  ## mod_announce:
    ## access: announce
  ## mod_avatar: {}
  ## mod_blocking: {}
  mod_bosh: {}
  ## mod_caps: {}
  ## mod_carboncopy: {}
  ## mod_client_state: {}
  ## mod_configure: {}
  ## mod_delegation: {}   # for xep0356
  ## mod_disco: {}
  ## mod_echo: {}
  ## mod_fail2ban: {}
  ## mod_http_api: {}
  ## mod_http_upload:
  ##   put_url: "https://@HOST@:5443/upload"
  ## mod_last: {}
  ## mod_mam:
  ##   ## Mnesia is limited to 2GB, better to use an SQL backend
  ##   ## For small servers SQLite is a good fit and is very easy
  ##   ## to configure. Uncomment this when you have SQL configured:
  ##   ## db_type: sql
  ##   assume_mam_usage: true
  ##   default: always
  mod_muc:
    access:
      - allow
    access_admin:
      - allow: admin
    access_create: muc_create
    access_persistent: muc_create
    default_room_options:
      mam: false
      presence_broadcast: [moderator, participant, visitor]
  mod_muc_admin: {}
  mod_offline:
    bounce_groupchat: true
    access_max_user_messages: 1
  mod_ping:
    send_pings: true
    ping_interval: 15
    ping_ack_timeout: 30
    timeout_action: kill
  ## mod_pres_counter:
    ## count: 5
    ## interval: 60
  ## mod_privacy: {}
  ## mod_private: {}
  ## mod_proxy65: {}
  ## mod_pubsub:
    ## access_createnode: pubsub_createnode
    ## plugins:
      ## - "flat"
      ## - "pep"
    ## force_node_config:
      ## "eu.siacs.conversations.axolotl.*":
        ## access_model: open
      ## Avoid buggy clients to make their bookmarks public
      ## "storage:bookmarks":
        ## access_model: whitelist
  ## mod_push: {}
  ## mod_push_keepalive: {}
  ## mod_register:
  ##   ## Only accept registration requests from the "trusted"
  ##   ## network (see access_rules section above).
  ##   ## Think twice before enabling registration from any
  ##   ## address. See the Jabber SPAM Manifesto for details:
  ##   ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
  ##   ip_access: trusted_network
  ## mod_roster:
    ## versioning: true
  ## mod_s2s_dialback: {}
  ## mod_shared_roster: {}
  ## mod_sic: {}
  ## mod_vcard:
    ## search: false
  ## mod_vcard_xupdate: {}
  ## mod_version: {}

I would greatly appreciate it if someone could point out any flaws in my configuration that might be causing this problem.

1 Answer

I don't have a websocket client to replicate that behavior, but I do have the Gajim client, which supports BOSH.

I log in with the Gajim client over BOSH and join a room where there are other occupants. Then I kill the Gajim client abruptly. Nothing is logged in the ejabberd logs, and nothing happens in the chatroom.

After 30 seconds, the log shows:

17:35:13.638 [info] (http_bind|<0.558.0>) Closing c2s session for user2@localhost/gajim.MQSPY3HC: Connection failed: connection closed

Immediately after that, the other room occupants receive the unavailable presence.
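
If I'm reading that right, the 30 seconds presumably comes from mod_bosh's max_inactivity timeout, which defaults to 30 seconds. A minimal sketch of shortening it for testing (untested, assuming mod_bosh stays enabled as in your config):

modules:
  mod_bosh:
    ## max_inactivity: seconds a BOSH session may stay inactive before
    ## ejabberd closes it (default 30); lowering it should make the
    ## unavailable presence appear sooner after an abrupt kill.
    max_inactivity: 10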

I wonder if you get the problem when using BOSH too, or whether it's strictly related to WebSocket.

Badlop
  • Same problem with BOSH, except I get this log message after 30 seconds: `03:11:12.151 [info] (http_bind|<0.1297.0>) Closing c2s session for 44187387317082057262818@localhost/124768794887832300882834: Connection failed: ping_timeout` Still no unavailable presence until an active participant sends a message. I confirmed that version 16.09 has the expected behavior with both BOSH and WebSocket. – Jonathan Moore Jan 21 '20 at 03:12
  • I've installed ejabberd 16.09 from the binary installer and then followed the same steps I described earlier, and I get the same behaviour: I kill the BOSH room occupant, and the other occupants notice after 37 seconds, when the session times out: `2020-01-22 13:55:58.641 [info] <0.484.0>@ejabberd_http_bind:handle_info:508 Session timeout. Closing the HTTP bind session: <<"021df9e4867e9d2aeffa469f8ee6f08f197ea73e">>` – Badlop Jan 22 '20 at 12:58
  • By "expected behavior", I meant that 16.09 does _not_ exhibit the same problem as 19.09.1. Sorry for the confusion. I'm going to do more version-by-version testing to narrow down the root cause and hopefully allow you to reproduce what I'm experiencing. – Jonathan Moore Jan 23 '20 at 14:23