This document is for developers who are interested in contributing to the Gentics Mesh project. It aims to provide you with a big picture and starting points to get your bearings within the Gentics Mesh codebase.
A good source for tasks to contribute to is the quick wins list. These tasks have been labelled to indicate that the implementation is fairly easy.
The contribution format is described in the CONTRIBUTING.md file.
In order for any contributions to be accepted you must sign our Contributor License Agreement.
The purpose of this agreement is to protect users of this codebase by ensuring that all code is free to use under the stipulations of the Apache 2.0 license.
Make sure that you use at least Eclipse Neon.
Install the following Maven m2e workshop plugins:
m2e-apt-plugin
Make sure that your Eclipse Maven APT settings are set to "Automatically configure JDT APT". If you don’t find this option, you most likely need to install the M2E APT plugin for Eclipse.
Import all Maven modules in your IDE.
Please note that this project is using Google Dagger for dependency injection. Adding new dependencies or beans may require a fresh build (via Project→Clean) of the mesh-core and mesh-api modules.
Import the project and select the pom.xml under the mesh folder.
To run the server from IntelliJ:
Create an Application run configuration.
In the Main Class field, put com.gentics.mesh.server.ServerRunner.
In the Use classpath of module field, select the mesh-server module.
In the Working directory field, choose an empty folder.
Build the project by executing the Maven command below before starting up Mesh.
You need to manually set the Working directory in your Run/Debug Configuration to an empty folder if you run the ServerRunner or DemoRunner application. Otherwise Gentics Mesh will not be able to start the embedded Elasticsearch service.
You can build Gentics Mesh locally using Apache Maven.
Running all tests locally is not recommended since the execution time is very high. Most tests require a so-called test context provider to run against, which includes a Docker environment to run a test database and some other integration components.
git clone git@github.com:gentics/mesh.git
cd mesh
git checkout master
export JAVA_HOME=<PATH-TO-YOUR-JAVA11>
mvn clean package -DskipTests -Dskip.unit.tests -Dskip.performance.tests -Dskip.cluster.tests
The master branch should be used as the branch to build since the dev branch can be unstable.
The main components which are used to build Gentics Mesh are:
Component | Usage |
---|---|
Vert.x | Provides the HTTP server, authentication, upload handling, the eventbus and request routing. |
Dagger 2 | Dependency injection library. |
RxJava | Library which is used to compose asynchronous requests/processes. |
Hibernate | Object relational mapping. |
Since you are most likely already familiar with the Gentics Mesh REST API, that is the best place to start.
We’ll first look at how the REST API is set up and then continue with how requests are handled.
All REST API endpoints are provided by the RestAPIVerticle which, as the name suggests, is a Vert.x verticle.
Verticles are deployment units which are registered by Vert.x and contain application code. Gentics Mesh only uses a few verticles, one of which is the RestAPIVerticle. Verticles are not used to modularize or extend the REST API.
The RestAPIVerticle sets up the actual HTTP server which accepts the requests and uses Vert.x routers to process each HTTP request and direct it to the registered endpoints.
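To make this concrete, here is a minimal Vert.x sketch of the pattern (assuming Vert.x 3.8+ where a Router can be passed directly as the request handler); the route, port and response body are illustrative and not Mesh's actual setup:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

// Minimal sketch of the verticle pattern described above (illustrative, not Mesh code).
public class ExampleRestVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // A router dispatches incoming HTTP requests to the registered routes.
    Router router = Router.router(vertx);
    router.get("/api/v2/users").handler(rc -> rc.response().end("{\"users\":[]}"));

    // The verticle owns the HTTP server and hands every request over to the router.
    vertx.createHttpServer()
      .requestHandler(router)
      .listen(8080);
  }
}
```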
The RouterStorage is the main class which manages all the REST API routes. A storage is assigned to each RestAPIVerticle instance.
The storage is used to organize routes by purpose and to make routes re-usable.
There are for example core routes (e.g. /api/v2/users) and project specific routes (e.g. /api/v2/:projectName/nodes).
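A rough, hypothetical illustration of organizing and mounting routers onto each other with Vert.x (mountSubRouter is the Vert.x 3.x style API; this is not actual RouterStorage code):

```java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

// Hypothetical sketch of organizing routers by purpose (core vs. project routes).
public class RouterMountSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    Router rootRouter = Router.router(vertx);     // handles "/"
    Router apiRouter = Router.router(vertx);      // handles "/api/v2"
    Router projectRouter = Router.router(vertx);  // handles one project's routes

    // Core routes live below /api/v2.
    rootRouter.mountSubRouter("/api/v2", apiRouter);
    // A project specific router is mounted per project, e.g. for the "demo" project.
    apiRouter.mountSubRouter("/demo", projectRouter);
  }
}
```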
The RestAPIVerticle pulls endpoints from various endpoint classes like UserEndpoint, RoleEndpoint and NodeEndpoint.
Each of those endpoint classes is assigned a dedicated router to which the EndpointRoutes can be registered. These in turn handle the actual HTTP request.
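A hedged sketch of what such an endpoint class conceptually does; the class name and wiring below are illustrative and do not mirror Mesh's internal API:

```java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

// Illustrative endpoint class: it gets a dedicated router and registers its routes on it.
public class ExampleNodeEndpoint {

  private final Router router;

  public ExampleNodeEndpoint(Vertx vertx) {
    this.router = Router.router(vertx);
  }

  public void registerEndpoints() {
    // GET /:nodeUuid relative to the router this endpoint was assigned to
    // (e.g. mounted under /api/v2/demo/nodes).
    router.get("/:nodeUuid").handler(rc -> {
      String uuid = rc.pathParam("nodeUuid");
      rc.response().end("{\"uuid\":\"" + uuid + "\"}");
    });
  }

  public Router getRouter() {
    return router;
  }
}
```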
How a request is being processed will be described in the next section.
Let’s follow this request:
GET /api/v2/demo/nodes/df8beb3922c94ea28beb3922c94ea2f6
The request is accepted by the HttpServer request handler and directed to the RootRouter (/).
This router, which is part of the RouterStorage, will direct the request to the APIRouter (/api/v2).
Next the request is routed to the ProjectRouter (/api/v2/demo/). During this step the reference to the demo project is loaded and added to the RoutingContext for later use.
After that the request is passed to the Router which was assigned to the NodeEndpoint instance (/api/v2/demo/nodes/).
Finally the request is directed to the Route which matches the remaining path and HTTP method (GET /api/v2/demo/nodes/df8beb3922c94ea28beb3922c94ea2f6).
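To make the flow concrete, here is a hedged Vert.x sketch of the same idea: an earlier handler resolves the project and stores it in the RoutingContext, and the final route reads it back together with the node uuid (the context key and names are illustrative):

```java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

// Illustrative routing chain: a project handler enriches the context, the node route consumes it.
public class RequestFlowSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);

    // Comparable to the ProjectRouter step: resolve the project and stash it for later handlers.
    router.route("/api/v2/:projectName/nodes/:nodeUuid").handler(rc -> {
      rc.put("project", rc.pathParam("projectName")); // Mesh stores a project reference, not just the name
      rc.next();
    });

    // Comparable to the final Route: matches the remaining path and the HTTP method.
    router.get("/api/v2/:projectName/nodes/:nodeUuid").handler(rc -> {
      String project = rc.get("project");
      String uuid = rc.pathParam("nodeUuid");
      rc.response().end("{\"project\":\"" + project + "\",\"uuid\":\"" + uuid + "\"}");
    });

    vertx.createHttpServer().requestHandler(router).listen(8080);
  }
}
```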
Each REST endpoint class (e.g. NodeEndpoint) usually also has a dedicated CRUDHandler which provides the actual code that processes the request.
For the NodeEndpoint this would be the NodeCrudHandler. The NodeCrudHandler#handleRead method accepts the request and processes it.
The CRUDHandler loads the data object which is used to aggregate the elements.
In order to load nodes the RootVertex would be the NodeRoot, which is connected to the Project vertex that has been loaded before within the ProjectRouter.
Next the selected element will be loaded using the NodeRoot and the given uuid, checked against the needed permissions, and transformed to JSON via the Node#transformToRestSync method. A simplified sketch of this flow follows.
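The sketch below is hypothetical Java that mirrors the three steps above; the types and method names are stand-ins and not Mesh's actual internal API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for the read flow described above; the real Mesh types differ.
public class CrudReadSketch {

  static class DemoNode {
    final String uuid;
    final String displayName;

    DemoNode(String uuid, String displayName) {
      this.uuid = uuid;
      this.displayName = displayName;
    }

    boolean hasReadPermission(String userName) {
      // Stand-in for the permission check against the element itself.
      return true;
    }

    Map<String, String> toRestModel() {
      // Stand-in for transforming the element to its REST model (cf. transformToRestSync).
      Map<String, String> rest = new HashMap<>();
      rest.put("uuid", uuid);
      rest.put("displayName", displayName);
      return rest;
    }
  }

  interface NodeSource {
    // Stand-in for loading the element via its root/aggregation vertex.
    DemoNode loadByUuid(String uuid);
  }

  static Map<String, String> handleRead(NodeSource nodes, String userName, String uuid) {
    DemoNode node = nodes.loadByUuid(uuid);      // 1. load by uuid
    if (!node.hasReadPermission(userName)) {     // 2. check the needed permission
      throw new IllegalStateException("Missing read permission on " + uuid);
    }
    return node.toRestModel();                   // 3. transform to the REST model / JSON
  }

  public static void main(String[] args) {
    NodeSource nodes = uuid -> new DemoNode(uuid, "Demo node");
    System.out.println(handleRead(nodes, "admin", "df8beb3922c94ea28beb3922c94ea2f6"));
  }
}
```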
We already hinted that Projects and Nodes are vertices. In fact all elements in Gentics Mesh are models within a Graph Model. The graph has a root element which is used as an entry point for Gentics Mesh. During startup this vertex will be loaded and all further interactions will use this vertex to load more and more of the graph. References to some of these vertices will be kept in memory to speed things up.
The graph structure is documented within this interactive graph gist.
The gist may be a bit outdated in some places but the general structure is still valid.
Gentics Mesh uses an SQL database to store its data, accessed through an object relational mapping (ORM) library. The default RDBMS is MariaDB.
A single remote database is used for both single-instance and clustered modes.
Name | Description |
---|---|
mesh-api | Contains API classes like configuration POJOs and constants. |
mesh-core | Contains the data model and the main codebase. |
mesh-database-connectors | Contains the database connector code. |
mesh-demo | Contains the Gentics Mesh demo which can be run via the DemoRunner application. |
mesh-rest-client | Contains the Vert.x based REST client. |
mesh-rest-model | Contains the POJOs for the REST API models. |
mesh-doc | Contains sources for the getmesh.io documentation and tools to generate tables and examples from sources. |
mesh-server | Contains the Gentics Mesh server which can be run via the ServerRunner application. |
mesh-mdm | Contains an abstraction over the multi-database management API (aka Multi Database Mesh). |
mesh-mdm-hibernate | Contains the ORM related abstractions, in separate API and implementation submodules. |
mesh-distributed | Contains code which takes care of event handling and event processing in a cluster environment. |
mesh-service-local-storage | Contains code for the binary storage system which stores data locally on disk. |
mesh-graphql | Contains code for the GraphQL endpoint and GraphQL types. |
mesh-service-image-imgscalr | Contains an image resizer implementation based on imgscalr. |
mesh-performance-tests | Contains dedicated performance tests. |
mesh-common | Contains common classes and interfaces which are shared among internal Maven modules. |
mesh-elasticsearch | Contains classes needed for the Elasticsearch integration. |
mesh-integration-tests | Contains integration tests for Gentics Mesh and the UI. |
mesh-test-common | Contains classes which provide e.g. Testcontainers test rules to make it easy to set up integration tests. |
Understanding the startup sequence of Gentics Mesh also helps to get an idea of the components involved.
1. Load the Mesh options and run Mesh.
2. Use the Mesh factory to get the Mesh instance.
3. Initialize the Dagger context.
4. Set up the Dagger context (dependency injection wiring).
5. Initialize the database and set up mandatory (admin role, user, group) data.
6. Set up routes for the project endpoints.
7. Load the verticles (e.g. the RestAPIVerticle).
Deploying the verticles will start the REST API HTTP server and Mesh is ready to be used.
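Two of the startup steps revolve around the Dagger dependency injection context. As a reminder of what that means in general, here is a minimal, generic Dagger 2 example which is unrelated to Mesh's actual component graph:

```java
import javax.inject.Inject;
import javax.inject.Singleton;

import dagger.Component;

// Minimal, generic Dagger 2 example; Mesh's real component graph is far larger.
public class DaggerSketch {

  static class GreetingHandler {
    @Inject
    GreetingHandler() {} // Dagger learns how to build this class from the @Inject constructor.

    String greet() {
      return "hello";
    }
  }

  @Singleton
  @Component
  interface SketchComponent {
    GreetingHandler greetingHandler();
  }

  public static void main(String[] args) {
    // DaggerDaggerSketch_SketchComponent is generated by the Dagger annotation processor at build time,
    // which is why a fresh build may be needed after dependency changes (see the Eclipse note above).
    SketchComponent component = DaggerDaggerSketch_SketchComponent.create();
    System.out.println(component.greetingHandler().greet());
  }
}
```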
Elasticsearch (ES) stores searchable documents in a flat format since ES is not able to handle relationships to other documents. The AbstractIndexHandler implementations flatten Mesh elements to the ES document format in order to provide the search models.
The node search model document also contains the tags of the node. It is mandatory to update the node document when one of the referenced tags is renamed or removed, or when a new tag is added. This pattern applies to various elements and actions within Mesh. Every CRUD operation may also provide a search queue batch (SQB) which contains the information about which ES documents need to be updated, removed or added. The SQB is persisted within the transaction and is only stored when it succeeds.
The SQB is directly processed after the modifying transaction has been committed.
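A hedged illustration of what flattening means here; the field names are made up for the example and do not reflect Mesh's actual Elasticsearch document schema:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative flattening of a node plus its tag names into a single flat ES document.
public class FlattenSketch {
  public static void main(String[] args) {
    List<String> tagNames = Arrays.asList("red", "on-sale");

    Map<String, Object> esDocument = new HashMap<>();
    esDocument.put("uuid", "df8beb3922c94ea28beb3922c94ea2f6");
    esDocument.put("displayName", "Demo node");
    // The tags are embedded by value, not referenced, which is why renaming a tag
    // also requires the node document to be updated.
    esDocument.put("tags", tagNames);

    System.out.println(esDocument);
  }
}
```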
The MeshAuthProvider is used to authenticate the user credentials. The MeshAuthHandler uses this provider in order to authenticate the user.
Instead of Vert.x’s User.isAuthorised, the UserImpl#hasPermission methods must be used since Vert.x’s authorization code is not compatible with document level permission systems that use objects instead of strings to validate permissions.
The HttpStatusCodeErrorException should be used whenever an exception needs to be thrown or returned. Static factory methods can be used instead of constructor calls. It is not required to manually translate the exception message; exceptions of this type will automatically be translated if possible. This way only an i18n key needs to be set for the message.
The RouterStorage contains the last failure handler that catches all exceptions which have not yet been handled.
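A hedged sketch combining both points: a document level permission check via a hasPermission style method, and an exception that only carries an i18n key plus parameters. All types below are illustrative placeholders, not Mesh classes:

```java
// Hypothetical sketch of a permission check that fails with an i18n-keyed error
// (placeholder types; only the hasPermission idea comes from the text above).
public class PermissionSketch {

  static class I18nStatusException extends RuntimeException {
    final int status;
    final String i18nKey;
    final String[] params;

    I18nStatusException(int status, String i18nKey, String... params) {
      this.status = status;
      this.i18nKey = i18nKey;   // translated later by the central failure handler
      this.params = params;
    }
  }

  interface MeshUserLike {
    boolean hasPermission(Object element, String permission);
  }

  static void assertReadPermission(MeshUserLike user, Object node, String uuid) {
    // Document level permission check on the object itself instead of a permission string lookup.
    if (!user.hasPermission(node, "READ")) {
      throw new I18nStatusException(403, "error_missing_perm", uuid);
    }
  }

  public static void main(String[] args) {
    MeshUserLike adminLike = (element, permission) -> true;
    assertReadPermission(adminLike, new Object(), "df8beb3922c94ea28beb3922c94ea2f6");
    System.out.println("permission granted");
  }
}
```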
Transactions can be started using the currently registered Database provider class.
Method | Description |
---|---|
noTrx(TrxHandler<T> txHandler) | Autocommit transaction. This method should only be used for read-only operations. (blocking) |
asyncTrx() | Regular async transaction. (non-blocking) |
trx(TrxHandler<T> txHandler) | Regular transaction. (blocking) |
Transactions should not be nested. Nesting transactions will just result in the inner transaction utilizing the previously opened outer transaction.
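A usage sketch of the two blocking methods from the table above, assuming a Database type with those signatures (the stand-in implementation below simply executes the handlers):

```java
// Hedged usage sketch; the Database and TrxHandler types here are stand-ins, not Mesh's API.
public class TransactionSketch {

  interface TrxHandler<T> {
    T handle();
  }

  interface Database {
    <T> T noTrx(TrxHandler<T> txHandler);  // autocommit, read only (blocking)
    <T> T trx(TrxHandler<T> txHandler);    // regular transaction (blocking)
  }

  static String loadUserName(Database db, String uuid) {
    // Read only access: no explicit commit needed.
    return db.noTrx(() -> "user-" + uuid);
  }

  static void renameUser(Database db, String uuid, String newName) {
    // Modifying access: wrap the change in a regular transaction.
    db.trx(() -> {
      // ... load the user by uuid and apply newName here ...
      return null;
    });
  }

  public static void main(String[] args) {
    // A trivial in-memory stand-in that just executes the handler.
    Database db = new Database() {
      public <T> T noTrx(TrxHandler<T> h) { return h.handle(); }
      public <T> T trx(TrxHandler<T> h) { return h.handle(); }
    };
    System.out.println(loadUserName(db, "df8beb3922c94ea28beb3922c94ea2f6"));
    renameUser(db, "df8beb3922c94ea28beb3922c94ea2f6", "newName");
  }
}
```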
Each of the endpoints has one or more JUnit test classes which test the routes (e.g. NodeEndpointTest, UserEndpointTest). Running the tests almost always requires a test context provider, which contains the corresponding database connector and a Docker instruction set for the dedicated database. The JAVA_HOME environment variable has to be set up correctly as well, since it is used by the internal tooling.
An example of running a single GraphQL endpoint test against the HSQL database:
mvn -fae -U -Dmaven.javadoc.skip=true -Dskip.cluster.tests=true -Dmaven.test.failure.ignore=true -Dmesh.container.image.prefix=docker.gentics.com/ -B -e -pl '!doc,!performance-tests' test -Dtest=com.gentics.mesh.core.graphql.GraphQLEndpointTest -DfailIfNoTests=false -Djacoco.skip=true -Dskip.hsqlmemory.tests=false -Dmesh.testdb.manager.host=localhost -Dmesh.testdb.manager.port=8080
The same works for MariaDB. Please note that neither the Mesh database connector nor the database itself needs to be set up in advance; everything is done by the test context provider:
mvn -fae -U -Dmaven.javadoc.skip=true -Dskip.cluster.tests=true -Dmaven.test.failure.ignore=true -Dmesh.container.image.prefix=docker.gentics.com/ -B -e -pl '!doc,!performance-tests' test -Dtest=com.gentics.mesh.core.graphql.GraphQLEndpointTest -DfailIfNoTests=false -Djacoco.skip=true -Dskip.mariadb.tests=false -Dmesh.testdb.manager.host=localhost -Dmesh.testdb.manager.port=8080
Avoid wrapping code which invokes REST calls in transactions in your tests. Otherwise you may not be able to assert the changes made by REST calls since the transaction still references the old data.
In addition to Mockito and JUnit, the AssertJ library is used to create fluent, readable custom assertions.
The MeshAssertions class should be used to add new custom assertions.
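A generic AssertJ example of the kind of custom assertion such a class collects; the Node type and the assertThatNode entry point are made up for illustration:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.assertj.core.api.AbstractAssert;

// Generic AssertJ custom assertion sketch; the Node type and entry point are illustrative.
public class NodeAssertSketch {

  static class Node {
    final String uuid;
    Node(String uuid) { this.uuid = uuid; }
  }

  static class NodeAssert extends AbstractAssert<NodeAssert, Node> {
    NodeAssert(Node actual) {
      super(actual, NodeAssert.class);
    }

    NodeAssert hasUuid(String expected) {
      isNotNull();
      assertThat(actual.uuid).as("uuid of node").isEqualTo(expected);
      return this;
    }
  }

  // A central entry point, similar in spirit to MeshAssertions, would expose this method.
  static NodeAssert assertThatNode(Node actual) {
    return new NodeAssert(actual);
  }

  public static void main(String[] args) {
    assertThatNode(new Node("df8beb3922c94ea28beb3922c94ea2f6"))
      .hasUuid("df8beb3922c94ea28beb3922c94ea2f6");
    System.out.println("assertion passed");
  }
}
```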
Currently most of the Dagger dependencies can be accessed via MeshInternal.get().
The short form for the impatient:
The RestAPIVerticle contains all EndpointRouters.
The NodeEndpoint contains the routes for /api/v2/:projectName/nodes.
Elements in Mesh each have dedicated classes which directly represent the data (e.g. HibNodeImpl, HibUserImpl).
Endpoint classes like NodeEndpoint also have a CRUD handler class (e.g. NodeCrudHandler).