
While testing Apache ActiveMQ Artemis 2.13.0 with JGroups and the KUBE_PING plugin on Kubernetes, I have noticed that no more than two brokers ever form a cluster. Any additional brokers are simply ignored, whether they are started at the same time or later.

Once brokers 1 and 2 have established a bridge, I see these messages:

2020-06-17 13:11:51,416 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@320cdb5b [name=$.artemis.internal.sf.artemis-cluster.19db4aef-b09c-11ea-924c-02276c4ed44d, queue=QueueImpl[name=$.artemis.internal.sf.artemis-cluster.19db4aef-b09c-11ea-924c-02276c4ed44d, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=094f73d3-b09c-11ea-842e-b65403ac65ce], temp=false]@1a62eadd targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@320cdb5b [name=$.artemis.internal.sf.artemis-cluster.19db4aef-b09c-11ea-924c-02276c4ed44d, queue=QueueImpl[name=$.artemis.internal.sf.artemis-cluster.19db4aef-b09c-11ea-924c-02276c4ed44d, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=094f73d3-b09c-11ea-842e-b65403ac65ce], temp=false]@1a62eadd targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61618&host=0-0-0-0], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1466558382[nodeUUID=094f73d3-b09c-11ea-842e-b65403ac65ce, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61618&host=0-0-0-0, address=, server=ActiveMQServerImpl::serverUUID=094f73d3-b09c-11ea-842e-b65403ac65ce])) [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61618&host=0-0-0-0], discoveryGroupConfiguration=null]] is connected
2020-06-17 13:12:31,127 WARNING [org.jgroups.protocols.pbcast.FLUSH] asa-activemq-artemis-primary-0-51182: waiting for UNBLOCK timed out after 2000 ms

Any further broker that is started does not join the cluster. However, when I add <TRACE/> to the jgroups.xml file, I can see all the nodes talking to each other, so it looks like the discovery/joining process simply stops once a second broker has joined. Also worth mentioning: brokers 3 and 4 appear to form their own separate cluster.

Configuration (broker.xml):

    <connectors>
      <connector name="netty-connector">tcp://0.0.0.0:61618</connector>
    </connectors>

    <acceptors>
      <acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>
    </acceptors>

    <broadcast-groups>
      <broadcast-group name="cluster-broadcast-group">
        <broadcast-period>5000</broadcast-period>
        <jgroups-file>jgroups.xml</jgroups-file>
        <jgroups-channel>active_broadcast_channel</jgroups-channel>
        <connector-ref>netty-connector</connector-ref>
      </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
      <discovery-group name="cluster-discovery-group">
        <jgroups-file>jgroups.xml</jgroups-file>
        <jgroups-channel>active_broadcast_channel</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
      </discovery-group>
    </discovery-groups>

    <cluster-connections>
      <cluster-connection name="artemis-cluster">
        <connector-ref>netty-connector</connector-ref>
        <retry-interval>500</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <message-load-balancing>STRICT</message-load-balancing>
        <!-- <address>jms</address> -->
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="cluster-discovery-group"/>
        <!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
      </cluster-connection>
    </cluster-connections>

Note that I am using the Docker image vromero/activemq-artemis:2.13.0, which merges the config shown above into the main broker.xml.
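
In case it is useful, the extra files can be provided to the pods via a ConfigMap. This is only a sketch; the names and the mount path follow the image's etc-override convention as I understand it, so verify it against the image documentation:

    # Fragment of the pod template spec (names and the mount path are illustrative)
    volumes:
      - name: artemis-override
        configMap:
          name: artemis-override           # ConfigMap containing jgroups.xml and the broker.xml snippet
    containers:
      - name: artemis
        image: vromero/activemq-artemis:2.13.0
        volumeMounts:
          - name: artemis-override
            mountPath: /var/lib/artemis/etc-override   # assumed override directory, check the image docs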

jgroups.xml:

<config xmlns="urn:org:jgroups"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">

  <TCP
    enable_diagnostics="true"
    bind_addr="match-interface:eth0,lo"
    bind_port="7800"
    recv_buf_size="20000000"
    send_buf_size="640000"
    max_bundle_size="64000"
    max_bundle_timeout="30"
    sock_conn_timeout="300"

    thread_pool.enabled="true"
    thread_pool.min_threads="1"
    thread_pool.max_threads="10"
    thread_pool.keep_alive_time="5000"
    thread_pool.queue_enabled="false"
    thread_pool.queue_max_size="100"
    thread_pool.rejection_policy="run"

    oob_thread_pool.enabled="true"
    oob_thread_pool.min_threads="1"
    oob_thread_pool.max_threads="8"
    oob_thread_pool.keep_alive_time="5000"
    oob_thread_pool.queue_enabled="true"
    oob_thread_pool.queue_max_size="100"
    oob_thread_pool.rejection_policy="run"
  />

  <!-- <TRACE/> -->

  <org.jgroups.protocols.kubernetes.KUBE_PING
    namespace="${KUBERNETES_NAMESPACE:default}"
    labels="${KUBERNETES_LABELS:cluster=activemq-artemis-asa}"
  />

  <MERGE3 min_interval="10000" max_interval="30000"/>
  <FD_SOCK/>
  <FD timeout="10000" max_tries="5" />
  <VERIFY_SUSPECT timeout="1500" />
  <BARRIER />
  <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
  <UNICAST3
    xmit_table_num_rows="100"
    xmit_table_msgs_per_row="1000"
    xmit_table_max_compaction_time="30000"
  />
  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
  <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
  <FC max_credits="2000000" min_threshold="0.10"/>
  <FRAG2 frag_size="60000" />
  <pbcast.STATE_TRANSFER/>
  <pbcast.FLUSH timeout="0"/>

</config>
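
As a side note, KUBE_PING discovers peers by querying the Kubernetes API for pods in the configured namespace that carry the configured labels, so the brokers' service account must be allowed to read pods. A minimal sketch of the RBAC objects such a setup typically needs (names are illustrative):

    # ServiceAccount the broker pods run under, plus read-only access to pods
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: artemis
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: artemis-kube-ping
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list"]             # KUBE_PING only needs to read pods
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: artemis-kube-ping
      namespace: default
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: artemis-kube-ping
    subjects:
      - kind: ServiceAccount
        name: artemis
        namespace: default

The broker pods then have to run with this service account (serviceAccountName: artemis in the pod spec).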

What is missing to dynamically discover more than two nodes? Is there a parameter that stops the discovery/joining process after the second broker?

  • I am working on the helm chart from vromero at the moment and would like to include jgroups as an option going forward. Could you share your steps with me a little more in detail? I would be keen to merge your experience into the helm chart to make it easier to use in future. – Namphibian Dec 06 '20 at 23:29

1 Answer


Okay, the connector configuration turned out to be the key:

<connectors>
    <connector name="netty-connector">tcp://0.0.0.0:61618</connector>
</connectors>

Other nodes cannot connect to 0.0.0.0. Once it is changed to an address the other brokers can actually reach, the cluster forms. Inside Kubernetes, using the hostname is tricky, so I changed it to:

<connectors>
    <connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
</connectors>

Of course, the ipv4addr property needs to be injected somehow. For me, setting JAVA_OPTS to -Dipv4addr=$(hostname -i) did the trick.
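
Alternatively, the property can be filled straight from the pod spec via the Downward API instead of hostname -i; a minimal sketch (container name and image are placeholders, and it assumes the image passes JAVA_OPTS through to the broker JVM):

    # Fragment of the container spec: expose the pod IP as POD_IP and reference
    # it in JAVA_OPTS (Kubernetes expands $(POD_IP) because it is defined first)
    containers:
      - name: artemis
        image: vromero/activemq-artemis:2.13.0
        env:
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: JAVA_OPTS
            value: "-Dipv4addr=$(POD_IP)"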

Scaling the Artemis cluster up and down now works as expected.
