ActiveMQ Classic / AMQ-6926

JDBC persistence is silently falling back to KahaDB if an error occurs


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 5.15.3
    • Fix Version/s: None
    • Component/s: Broker, JDBC
    • Labels: None
    • Environment: ActiveMQ 5.15.3, Wildfly 10.1.0

    Description

      We are using ActiveMQ 5.15.3 with Wildfly 10.1.0.

      ActiveMQ was integrated as a resource adapter as instructed here:

      http://www.mastertheboss.com/jboss-server/jboss-jms/integrate-activemq-with-wildfly

      In general it is working fine, but we have found an issue with JDBC
      persistence.

      If the password of our DB user is wrong, WildFly logs that error, but
      deployment of activemq.rar succeeds and the broker then uses KahaDB. So
      nobody notices that messages get persisted to the file system instead
      of the database.

      The issue only occurs if the VM connector is used.

      Resource adapter was configured as follows:

       

              <subsystem xmlns="urn:jboss:domain:resource-adapters:4.0">
                  <resource-adapters>
                      <resource-adapter id="activemq">
                          <archive>
                              activemq.rar
                          </archive>
                          <transaction-support>XATransaction</transaction-support>
                          <config-property name="ServerUrl">
                              vm://localhost
                          </config-property>
                          <config-property name="UserName">
                              admin
                          </config-property>
                          <config-property name="UseInboundSession">
                              false
                          </config-property>
                          <config-property name="BrokerXmlConfig">
                              xbean:broker-config.xml
                          </config-property>
                          <config-property name="Password">
                              xxxxx
                          </config-property>
                          <connection-definitions>
                              <connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/JmsXA" enabled="true" pool-name="ConnectionFactory">
                                  <xa-pool>
                                      <min-pool-size>1</min-pool-size>
                                      <max-pool-size>20</max-pool-size>
                                      <prefill>false</prefill>
                                      <is-same-rm-override>false</is-same-rm-override>
                                  </xa-pool>
                                  <recovery no-recovery="false">
                                      <recover-credential>
                                          <user-name>recovery-user</user-name>
                                          <password>secret</password>
                                      </recover-credential>
                                  </recovery>
                              </connection-definition>
                          </connection-definitions>
                          <admin-objects>
                              <admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/app/jms/mySyncAppQueue" use-java-context="true" pool-name="mySyncAppQueue">
                                  <config-property name="PhysicalName">
                                      app/jms/mySyncAppQueue
                                  </config-property>
                              </admin-object>
                              <admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/app/jms/myAsyncQueue" use-java-context="true" pool-name="myAsyncQueue">
                                  <config-property name="PhysicalName">
                                      app/jms/myAsyncQueue
                                  </config-property>
                              </admin-object>
                              <admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/app/jms/mySyncContainerQueue" use-java-context="true" pool-name="mySyncContainerQueue">
                                  <config-property name="PhysicalName">
                                      app/jms/mySyncContainerQueue
                                  </config-property>
                              </admin-object>
                              <admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/app/jms/classicQueue" use-java-context="true" pool-name="classicQueue">
                                  <config-property name="PhysicalName">
                                      app/jms/classicQueue
                                  </config-property>
                              </admin-object>
                          </admin-objects>
                      </resource-adapter>
                  </resource-adapters>
              </subsystem>
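      The BrokerXmlConfig property above points to the attached broker-config.xml
      (not shown here). For context, a minimal sketch of how a JDBC persistence
      adapter is typically wired in such a file; the datasource bean, driver
      class, URL, and credentials below are illustrative placeholders, not the
      reporter's actual settings:

```xml
<beans xmlns="http://www.springframework.org/schema/beans">

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
        <persistenceAdapter>
            <!-- References the datasource bean below; if its credentials are
                 wrong, the broker cannot reach the database on startup. -->
            <jdbcPersistenceAdapter dataSource="#jdbc-ds"/>
        </persistenceAdapter>
    </broker>

    <!-- Placeholder datasource; driver and connection details are assumptions. -->
    <bean id="jdbc-ds" class="org.postgresql.ds.PGSimpleDataSource">
        <property name="url" value="jdbc:postgresql://localhost:5432/activemq"/>
        <property name="user" value="activemq"/>
        <property name="password" value="xxxxx"/>
    </bean>

</beans>
```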
      

      We are now using the TCP connector as a workaround. If the password is
      wrong, no broker is available and incoming requests are refused as
      expected.

      Additionally, we will use a startup singleton bean to check whether the
      broker is available, so that startup fails if something is wrong.
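      The startup check mentioned above could be reduced to a plain TCP
      reachability probe, sketched below. Class name, host, port, and timeout
      are assumptions; in WildFly this logic would sit in a @Singleton @Startup
      bean whose @PostConstruct throws to abort deployment:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Hypothetical fail-fast availability check (names are assumptions, not from
 * the reporter's code). In WildFly this would live in a @Singleton @Startup
 * bean whose @PostConstruct throws to make deployment fail.
 */
public class BrokerAvailabilityCheck {

    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean isBrokerReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 61616 is the default port of the ActiveMQ TCP transport connector.
        boolean up = isBrokerReachable("localhost", 61616, 2000);
        if (!up) {
            // In the real bean this would be a thrown exception, failing startup.
            System.out.println("ActiveMQ broker is not reachable");
        }
    }
}
```

      With the VM connector such a probe is not possible, which is one more
      reason the TCP connector makes the misconfiguration visible.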

      Attachments

        1. broker-config.xml
          3 kB
          Clemens
        2. ra.xml
          13 kB
          Clemens

          People

            Assignee: Unassigned
            Reporter: Harder Clemens
            Votes: 0
            Watchers: 2

