7.1. Setting up eXo Platform cluster

  1. Install eXo Platform package by following Installation and Startup.

    If you are using the eXo Chat addon, you should install it on all the cluster nodes.

  2. Create a copy of the package for each cluster node. Assume that you have two nodes: node1.your-domain.com and node2.your-domain.com.

    Note

    For testing or troubleshooting, if you use Tomcat as the application server and run several cluster nodes in the same environment (same operating system), you must configure different Tomcat ports for each node.

  3. Configure the RDBMS datasources in each cluster node (follow this documentation) to use one of the supported database systems: PostgreSQL, MySQL, MSSQL, Oracle or MariaDB.

    Note

    • It is not possible to use the default embedded HSQL database, as noted in Configuring eXo Platform with database.

    • The different cluster nodes must use the same RDBMS datasources.
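    As an illustration, in Tomcat each datasource is declared as a JNDI resource in conf/server.xml. The resource name, driver and connection details below are assumptions to adapt to your actual eXo configuration and database system:

      <!-- Illustrative only: the JNDI name, driver and URL are assumptions,
           adapt them to your eXo configuration and database system. -->
      <Resource name="exo-idm_portal" auth="Container" type="javax.sql.DataSource"
                driverClassName="org.postgresql.Driver"
                url="jdbc:postgresql://db.your-domain.com:5432/exo_idm"
                username="exo" password="exo" maxTotal="20" maxIdle="5"/>

    The same datasource declarations (pointing to the same database endpoints) must be replicated on every node, as noted above.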

  4. eXo Platform comes with an embedded Elasticsearch. For clustering, you MUST use a separate Elasticsearch process. Please follow the steps described here.

  5. eXo Platform uses databases and disk folders to store its data:

    • Datasources:

      • IDM: datasource to store user/group/membership entities.

      • JCR: datasource to store JCR Data.

      • JPA: datasource to store entities mapped by Hibernate. Quartz tables are stored in this datasource by default.

    • Disk:

      • File storage data: Stored by default under a file system folder; can be configured to store files in the JPA datasource instead. More details here.

        If the file system storage implementation is configured, the folder must be shared between all cluster nodes.

        The folder location is configured with the property exo.files.storage.dir, for example exo.files.storage.dir=/exo-shared-folder-example/files/. You can set it in the exo.properties file.

      • JCR Binary Value Storage: Stored by default under a file system folder; can be configured to store files in the JCR datasource instead. More details here.

        If the file system storage implementation is configured, the folder must be shared between all cluster nodes.

        The folder location is configured with the property exo.jcr.storage.data.dir, for example exo.jcr.storage.data.dir=/exo-shared-folder-example/jcrvalues/. You can set it in the exo.properties file.

        Tip

        Choosing between file system and RDBMS storage depends on your needs and your system environment. (See more details in Comparing file system and RDBMS storage.)

      • JCR indexes: Stored under a local file system folder in each cluster node. More details here.

        eXo Platform uses local JCR indexes by default, and this is the recommended mode for clustering: read and write operations take less time in local mode than in shared mode.

    • Other systems: Such as MongoDB if eXo Chat addon is installed.
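    To illustrate the disk-related properties above: if the shared folder is, for example, an NFS mount available at the same path on every node, exo.properties on each node could contain the following (the paths are illustrative):

      # Illustrative paths: /mnt/exo-shared is assumed to be a folder
      # shared (e.g. via NFS) between all cluster nodes.
      exo.files.storage.dir=/mnt/exo-shared/files/
      exo.jcr.storage.data.dir=/mnt/exo-shared/jcrvalues/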

  6. Configure exo.cluster.node.name property. Use a different name for each node:

    • In JBoss, edit this property in the standalone-exo-cluster.xml file:

      
      
      <system-properties>
          <property name="exo.cluster.node.name" value="node1"/>
      </system-properties>
    • In Tomcat, add the property in setenv-customize.sh (.bat for windows environments):

      • For Windows:

        SET "CATALINA_OPTS=%CATALINA_OPTS% -Dexo.cluster.node.name=node1"
      • For Linux:

        CATALINA_OPTS="${CATALINA_OPTS} -Dexo.cluster.node.name=node1"
  7. eXo Platform uses the UDP protocol by default for JGroups. This protocol is not recommended for production environments, so you need to configure TCP as the transport protocol instead. For that purpose, please follow this documentation.

  8. Configure CometD Oort URL. Replace localhost in the following examples with the IP or host name of the node.

    • In JBoss, edit standalone-exo-cluster.xml:

      
      <property name="exo.cometd.oort.url" value="http://localhost:8080/cometd/cometd"/>
    • In Tomcat, edit exo.properties:

      exo.cometd.oort.url=http://localhost:8080/cometd/cometd

    CometD is used to perform messaging over the web, and Oort is a CometD extension that supports clustering. The configuration is necessary to make the On-site Notification work properly.
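    For example, with the two nodes assumed in step 2, each node declares its own address in its configuration:

      # exo.properties on node1
      exo.cometd.oort.url=http://node1.your-domain.com:8080/cometd/cometd

      # exo.properties on node2
      exo.cometd.oort.url=http://node2.your-domain.com:8080/cometd/cometd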

  9. Configure CometD group port. This step is optional.

    CometD Oort nodes automatically join other nodes in the same network and the same group. To prevent unknown nodes from joining your group, you can specify your own group with a port different from the default one (5577). This situation is most likely to happen in a testing environment.

    • In JBoss, edit standalone-exo-cluster.xml file:

      
      <!-- Configure the same port for all nodes in your cluster -->
      <property name="exo.cometd.oort.multicast.groupPort" value="5579"/>
    • In Tomcat, edit exo.properties file:

      # Configure the same port for all nodes in your cluster
      exo.cometd.oort.multicast.groupPort=5579
  10. The previous step applies when multicast is available on the system where CometD is deployed. Otherwise, the static discovery mechanism should be used by adding the following properties to the exo.properties file:

    
    
    exo.cometd.oort.configType=static
    exo.cometd.oort.cloud=http://host2:port2/cometd/cometd,http://host3:port3/cometd/cometd
    • The default value for exo.cometd.oort.configType is "multicast", and only the two values "multicast" and "static" are available.

    • The parameter exo.cometd.oort.cloud must contain a comma-separated list of the CometD endpoints of all the other nodes of the cluster. So in the example above, we assume that the node owning this exo.properties file is host1:port1, and that the cluster is composed of three nodes: host1, host2 and host3.
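    To make this concrete for the two-node cluster assumed in step 2 (host names and ports are illustrative), each node lists the endpoint of the other one:

      # exo.properties on node1.your-domain.com
      exo.cometd.oort.configType=static
      exo.cometd.oort.cloud=http://node2.your-domain.com:8080/cometd/cometd

      # exo.properties on node2.your-domain.com
      exo.cometd.oort.configType=static
      exo.cometd.oort.cloud=http://node1.your-domain.com:8080/cometd/cometd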

  11. Only in Tomcat, configure the following:

    • In setenv-customize.sh (.bat for Windows):

      EXO_PROFILES="all,cluster"
    • In exo.properties:

      gatein.jcr.config.type=cluster
      gatein.jcr.index.changefilterclass=org.exoplatform.services.jcr.impl.core.query.ispn.LocalIndexChangesFilter
      # Default JCR indexing is local so you need to use a different folder for each node.
      # With the value below, you do not have to create the folder.
      exo.jcr.index.data.dir=gatein/data/jcr/index
  12. Start the servers. You must wait until node1 is fully started before starting node2.

    In JBoss, you need to specify the configuration file with the -c option: ./bin/standalone.sh -b 0.0.0.0 -c standalone-exo-cluster.xml (.bat for Windows).

    Only in JBoss, some other options that you can use in the start command:

    • -Dexo.cluster.node.name=a-node-name overrides the node name in the configuration file.

    • -Djboss.socket.binding.port-offset=101

      This is useful in case you set up nodes in the same machine for testing. You will not need to configure the port for every node. Just use a different port-offset in each start command.
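    For instance, to start two JBoss nodes on the same machine for testing (a sketch; adjust the installation path to your environment):

      # Node 1
      ./bin/standalone.sh -b 0.0.0.0 -c standalone-exo-cluster.xml -Dexo.cluster.node.name=node1

      # Node 2: offset all ports by 101, so HTTP is served on 8181 instead of 8080
      ./bin/standalone.sh -b 0.0.0.0 -c standalone-exo-cluster.xml -Dexo.cluster.node.name=node2 -Djboss.socket.binding.port-offset=101

    As noted above, wait until node1 is fully started before launching node2.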

Note

If you run two nodes in the same machine for testing, change the default ports of node2 to avoid port conflict.

In Tomcat, ports are configured in conf/server.xml.
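For example, the second node could shift each default port. The relevant lines in conf/server.xml (excerpted; the shifted values are illustrative) could look like:

  <!-- conf/server.xml on node2: shifted ports to avoid conflicts with node1 -->
  <Server port="8105" shutdown="SHUTDOWN">                          <!-- default: 8005 -->
  <Connector port="8180" protocol="HTTP/1.1" redirectPort="8543"/>  <!-- default: 8080 -->
  <Connector port="8109" protocol="AJP/1.3" redirectPort="8543"/>   <!-- default: 8009 -->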

In JBoss, use -Djboss.socket.binding.port-offset option mentioned above.

To configure a front-end for your nodes, follow Setting up Apache front-end.

To configure load balancing, follow Setting up mod_jk.

Copyright ©. All rights reserved. eXo Platform SAS