diff --git a/CODEOWNERS b/CODEOWNERS index 85041e3ebe..fa2954fd2e 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -1,2 +1,2 @@ #GUSINFO:MS CX Engineering,MS CX (DOCS) -* @dukesphere @mulesoft/team-docs +* @dmerlob @mulesoft/team-docs diff --git a/antora.yml b/antora.yml index 51ebc744b1..86319309cb 100644 --- a/antora.yml +++ b/antora.yml @@ -4,3 +4,6 @@ version: '4.4' display_version: '4.4' nav: - modules/ROOT/nav.adoc +asciidoc: + attributes: + supportStatus: extendedSupportVersion diff --git a/modules/ROOT/assets/image-source-files/runtime-http-connections-diagram.graffle b/modules/ROOT/assets/image-source-files/runtime-http-connections-diagram.graffle new file mode 100644 index 0000000000..f6704cb475 Binary files /dev/null and b/modules/ROOT/assets/image-source-files/runtime-http-connections-diagram.graffle differ diff --git a/modules/ROOT/assets/images/runtime-http-connections-diagram.png b/modules/ROOT/assets/images/runtime-http-connections-diagram.png new file mode 100644 index 0000000000..579b6c3e4d Binary files /dev/null and b/modules/ROOT/assets/images/runtime-http-connections-diagram.png differ diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 50f66d42de..c63fd0bd25 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -1,6 +1,6 @@ -.xref:index.adoc[Mule Overview] +.xref:index.adoc[Mule Runtime] * xref:whats-new-in-mule.adoc[What's New in Mule] -* xref:index.adoc[Mule Overview] +* xref:index.adoc[Overview] ** xref:mule-components.adoc[Mule Components] ** xref:about-flows.adoc[Flows and Subflows] ** xref:about-mule-configuration.adoc[Mule Configuration File] @@ -31,7 +31,7 @@ include::partial$nav-app-dev.adoc[] *** xref:tuning-backend-server.adoc[Backend Server Response Time] *** xref:tuning-caching.adoc[Caching] *** xref:tuning-pooling-profiles.adoc[Pooling Profiles] - *** xref:tuning-domains.adoc[Domains] + *** xref:shared-resources.adoc[Domains] *** xref:tuning-logging.adoc[Logging] *** xref:tuning-batch-processing.adoc[Batch Processing] *** xref:tuning-app-design.adoc[App Design] @@ -55,12 +55,6 @@ include::partial$nav-app-dev.adoc[] ** xref:maven-reference.adoc[Maven Reference] * xref:securing.adoc[Security] ** xref:secure-configuration-properties.adoc[Secure Configuration Properties] - ** xref:cryptography.adoc[Cryptography Module] - *** xref:cryptography-pgp.adoc[PGP] - *** xref:cryptography-xml.adoc[XML] - *** xref:cryptography-jce.adoc[JCE] - *** xref:cryptography-reference.adoc[General Operations] - *** xref:cryptography-troubleshooting.adoc[Troubleshoot Cryptography Module] ** xref:fips-140-2-compliance-support.adoc[FIPS 140-2 Compliance Support] ** xref:setting-up-ldap-provider-for-spring-security.adoc[Configure LDAP Provider for Spring Security] ** xref:component-authorization-using-spring-security.adoc[Component Authorization Using Spring Security] diff --git a/modules/ROOT/pages/_partials/mmp-deploy-to-cloudhub-2.adoc b/modules/ROOT/pages/_partials/mmp-deploy-to-cloudhub-2.adoc index 050ff8b41f..7d8589b680 100644 --- a/modules/ROOT/pages/_partials/mmp-deploy-to-cloudhub-2.adoc +++ b/modules/ROOT/pages/_partials/mmp-deploy-to-cloudhub-2.adoc @@ -87,6 +87,27 @@ From the command line in your project's folder, package the application and exec mvn clean deploy -DmuleDeploy ---- +To deploy the artifact without rebuilding it, run: + +[source,bash,linenums] +---- +mvn mule:deploy +---- + +=== Exchange Snapshot Assets + +You can also deploy Exchange snapshot assets into CloudHub 2.0. 
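+
+In Maven terms, a snapshot version is one whose `pom.xml` version ends in `-SNAPSHOT`. The group, artifact, and version values below are placeholders shown only for illustration:
+
+[source,xml,linenums]
+----
+<groupId>com.mycompany</groupId>
+<artifactId>my-mule-app</artifactId>
+<version>1.0.1-SNAPSHOT</version>
+----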
+ +By using `SNAPSHOT` version assets in Anypoint Exchange during the development and testing phase, you can avoid incrementing your application's version number for small changes. After your `SNAPSHOT` version application has been overwritten in Anypoint Exchange, you can redeploy your `SNAPSHOT` version application to CloudHub 2.0 via the Mule Maven plugin to deploy the latest changes. + +To learn more about publishing snapshot assets to Anypoint Exchange, see xref:exchange::to-publish-assets-maven.adoc#asset-lifecycle-state[Asset Lifecycle State]. + +[NOTE] +==== +Each time you update your application's snapshot, redeploy the application to refresh it with the latest snapshot binaries. +Because snapshot assets can change after deployment, avoid deploying them into your production environment. +==== + == Redeploy to CloudHub 2.0 To redeploy the application, run the same command as you did to deploy. + @@ -126,7 +147,7 @@ Example values: `4.3.0`, `4.2.2-hf4` | Yes !=== .2+! `scopeLoggingConfiguration` ! `scope` ! The package of the logging library to use. -! `logLevel` ! The log level. Accepted values: `NONE`, `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`. +! `logLevel` ! The log level. Accepted values: `INFO`, `DEBUG`, `WARN`, `ERROR`, `FATAL`. !=== Configuration example: @@ -145,8 +166,8 @@ Configuration example: ---- | No | `target` | The CloudHub 2.0 target name to deploy the app to. + -Specify either a shared space or a private space available in your Deployment Target values in CloudHub 2.0. See xref:cloudhub-2::ch2-features.adoc[Features of CloudHub 2.0] for a detailed description on shared and private spaces. Use a value from the xref:cloudhub-2::ch2-architecture.adoc#regions-and-dns-records[list of regions].| Yes -| `provider` | Set to `MC`, for CloudHub 2.0. | Yes +Specify either a shared space or a private space available in your Deployment Target values in CloudHub 2.0. See xref:cloudhub-2::ch2-features.adoc[Features of CloudHub 2.0] for a detailed description on shared and private spaces. Use a target name value from the xref:cloudhub-2::ch2-architecture.adoc#regions-and-dns-records[list of regions]. For example, `Cloudhub-US-East-1`. | Yes +| `provider` | Provider MC (MuleSoft Control Plane) indicates that the deployment is managed through Anypoint Runtime Manager. Set to `MC` for CloudHub 2.0. | Yes | `environment` | Target Anypoint Platform environment. + This value must match an environment configured in your Anypoint Platform account, as shown here: + [source,xml,linenums] @@ -158,7 +179,7 @@ This value must match an environment configured in your Anypoint Platform accoun | `vCores` | The size of each replica specified in vCores. Accepted values: `0.1`, `0.2`, `0.5`, `1`, `1.5`, `2`, `2.5`, `3`, `3.5`, `4`. See xref:cloudhub-2::ch2-architecture.adoc#cloudhub-2-replicas[CloudHub 2.0 Replicas] for a detailed description of available vCore sizes and their assigned hardware resources. -| No +| Yes (Only when `instanceType` isn't configured.) include::mule-runtime::partial$mmp-concept.adoc[tag=businessGroupParameterDescription] include::mule-runtime::partial$mmp-concept.adoc[tag=businessGroupIdParameterDescription] include::mule-runtime::partial$mmp-concept.adoc[tag=deploymentTimeoutParameterDescription] @@ -204,7 +225,8 @@ Configuration example: [%header%autowidth.spread,cols=".^a,.^a"] |=== |Parameter | Description -| `enforceDeployingReplicasAcrossNodes` | Enforces the deployment of replicas across different nodes. The default value is `false`. 
+| `enforceDeployingReplicasAcrossNodes` | Enforces the deployment of replicas across different nodes. The default value is `false`. + +For high availability, set this value to `true`. Configuration example: [source,xml,linenums] @@ -249,8 +271,8 @@ Configuration example: [cols=".^1,.^1,.^3"] !=== .3+! `inbound` -// ! `pathRewrite` ! TBC. - ! `publicURL` ! URL of the deployed application. + ! `publicURL` ! URL of the deployed application. You can add multiple comma-separated values. + ! `pathRewrite` ! Supplies the base path expected by the HTTP listener in your application. This value must begin with `/`. This parameter is used only for applications that are deployed to xref:cloudhub-2::ch2-private-space-about.adoc[private space]. ! `lastMileSecurity` ! Enable Last-Mile security to forward HTTPS connections to be decrypted by this application. This requires an SSL certificate to be included in the Mule application, and also requires more CPU resources. The default value is `false`. ! `forwardSslSession` ! Enables SSL forwarding during a session. The default value is `false`. !=== @@ -261,6 +283,7 @@ Configuration example: https://myapp.anypoint.com + /api true true diff --git a/modules/ROOT/pages/_partials/mmp-deploy-to-rtf.adoc b/modules/ROOT/pages/_partials/mmp-deploy-to-rtf.adoc index 922e679d96..05b0e2ab23 100644 --- a/modules/ROOT/pages/_partials/mmp-deploy-to-rtf.adoc +++ b/modules/ROOT/pages/_partials/mmp-deploy-to-rtf.adoc @@ -99,6 +99,20 @@ From the command line in your project's folder, package the application and exec mvn clean deploy -DmuleDeploy ---- +=== Exchange Snapshot Assets + +You can also deploy Exchange snapshot assets into Runtime Fabric. + +By using `SNAPSHOT` version assets in Anypoint Exchange during the development and testing phase, you can avoid incrementing your application's version number for small changes. After your `SNAPSHOT` version application has been overwritten in Anypoint Exchange, you can redeploy your `SNAPSHOT` version application to Runtime Fabric via the Mule Maven plugin to deploy the latest changes. + +To learn more about publishing snapshot assets to Anypoint Exchange, see xref:exchange::to-publish-assets-maven.adoc#asset-lifecycle-state[Asset Lifecycle State]. + +[NOTE] +==== +Each time you update your application's snapshot, redeploy the application to refresh it with the latest snapshot binaries. +Because snapshot assets can change after deployment, avoid deploying them into your production environment. +==== + == Redeploy to Runtime Fabric To redeploy the application, run the same command as you did to deploy. + @@ -129,11 +143,13 @@ If not set, defaults to +https://anypoint.mulesoft.com+. | No | `muleVersion` | The Mule runtime engine version to run in your Runtime Fabric instance. + Ensure that this value is equal to or higher than the earliest required Mule version of your application. + Example values: `4.3.0`, `4.2.2-hf4` | Yes +| `releaseChannel`| Set the name of the release channel used to select the Mule image. Supported values are `NONE`, `EDGE`, and `LTS`. By default, the value is set to `EDGE`. If the selected release channel doesn't exist, an error occurs. | No +| `javaVersion` | Set the Java version used in the deploy. Supported values are `8` and `17`. By default, the value is set to `8`. If the selected Java version doesn't exist, an error occurs. | No | `username` | Your Anypoint Platform username | Only when using Anypoint Platform credentials to login. 
| `password` | Your Anypoint Platform password | Only when using Anypoint Platform credentials to login. | `applicationName` | The application name displayed in Runtime Manager after the app deploys. | Yes | `target` | The Runtime Fabric target name where to deploy the app. | Yes -| `provider` | Set to `MC`, for Runtime Fabric. | Yes +| `provider` | Provider MC (MuleSoft Control Plane) indicates that the deployment is managed through Anypoint Runtime Manager. Set to `MC` for Runtime Fabric. | Yes | `environment` | Target Anypoint Platform environment. + This value must match an environment configured in your Anypoint Platform account, as shown here: + [source,xml,linenums] @@ -230,6 +246,7 @@ Configuration example: If a `reserved` configuration is present, ensure that this value is equal or higher. .2+! `memory` ! `reserved` ! Specifies the amount of memory to allocate for each application replica. The default value is 700 MB. + ! `limit` ! Specifies the maximum memory allocated per application replica. If a `reserved` configuration is present, ensure that this value is equal or higher. !=== Configuration example: [source,xml,linenums] @@ -299,6 +316,8 @@ Configuration example: | `generateDefaultPublicUrl` | When this parameter is set to true, Runtime Fabric generates a public URL for the deployed application. +| `disableAmLogForwarding` | Disables the application-level log forwarding to Anypoint Monitoring. By default, it is set to `false`. + |=== // end::rtfDeploymentSettingsReference[] diff --git a/modules/ROOT/pages/_partials/nav-app-dev.adoc b/modules/ROOT/pages/_partials/nav-app-dev.adoc index 8ec47a8827..e231e150d7 100644 --- a/modules/ROOT/pages/_partials/nav-app-dev.adoc +++ b/modules/ROOT/pages/_partials/nav-app-dev.adoc @@ -1,6 +1,6 @@ * xref:mule-app-dev.adoc[Develop Mule Applications] -** xref:mule-app-dev-hellomule.adoc[Hello Mule Tutorial] -** xref:mule-app-tutorial.adoc[Mule App Development Tutorial] +** xref:mule-app-dev-hellomule.adoc[Tutorial: Create a "Hello World" Mule app] +** xref:mule-app-tutorial.adoc[Tutorial: Create a Mule app that uses the Database Connector and DataWeave] ** xref:about-components.adoc[Core Components] *** xref:async-scope-reference.adoc[Async Scope] *** xref:batch-processing-concept.adoc[] @@ -53,6 +53,7 @@ *** xref:until-successful-scope.adoc[Until Successful Scope] ** xref:build-application-from-api.adoc[Build an Application from an API] ** xref:build-an-https-service.adoc[Build an HTTPS Service] +** xref:http-connection-handling.adoc[Understand HTTP Connection Handling During Mule Runtime Shutdown] ** xref:global-elements.adoc[Configure Global Elements] ** xref:global-settings-configuration.adoc[Configure Global Settings] ** xref:configuring-properties.adoc[Configure Properties] diff --git a/modules/ROOT/pages/_partials/upgrade-tool.adoc b/modules/ROOT/pages/_partials/upgrade-tool.adoc index 64c476f375..fc2e2bc1bf 100644 --- a/modules/ROOT/pages/_partials/upgrade-tool.adoc +++ b/modules/ROOT/pages/_partials/upgrade-tool.adoc @@ -5,6 +5,8 @@ // tag::BeforeYouBegin[] * xref:release-notes::mule-upgrade-tool/mule-upgrade-tool.adoc[The latest available version of Mule upgrade tool] so that the tool runs with the latest fixes and security enhancements. ++ +Download the Mule upgrade tool from the https://help.mulesoft.com/s/[Help Center^]. * A currently operational Mule 4 instance in _stopped_ status to prepare for the upgrade. 
+ For upgrades of Mule versions between 4.1.1 and 4.1.4 with the Mule upgrade tool, you must upgrade from any patch update released after January 20, 2022. Releases of versions 4.1.1 through 4.1.4 _before_ January 20, 2022 are not supported by the tool, and attempts to upgrade them produce an error message stating that the Mule version cannot be upgraded without first upgrading to a supported version (see xref:release-notes::mule-runtime/upgrade-update-mule.adoc[]). @@ -15,6 +17,7 @@ The Mule upgrade tool requires the full distribution of the Mule runtime. Ensure Download Mule runtime distributions from the https://help.mulesoft.com/s/[Help Center^]. * At least 2 GB of available disk space on the file system and access privileges to install the new Mule distribution. * (For Windows environments) The execution policy for Powershell scripts set to *Unrestricted*. +* If Anypoint Monitoring agent is installed, uninstall it prior to the upgrade. // end::BeforeYouBegin[] // Upgrade Or Update Mule diff --git a/modules/ROOT/pages/about-classloading-isolation.adoc b/modules/ROOT/pages/about-classloading-isolation.adoc index 6582e88a8b..4e6b618421 100644 --- a/modules/ROOT/pages/about-classloading-isolation.adoc +++ b/modules/ROOT/pages/about-classloading-isolation.adoc @@ -124,7 +124,7 @@ All dependencies (JAR files, for example) declared in the application's `pom.xml Consider an application that uses Anypoint Connector for Java, and the connector needs to use a class that is part of a JAR dependency declared in the application's `pom.xml` file. However, this is not possible, because the connector's class loader is not able to find that class. To make this class visible to the connector, you must declare the dependency that contains the class as a shared library in the Mule Maven plugin configuration of your application's `pom.xml` file. -If you use Anypoint Studio or Flow Designer to configure a connector that uses external libraries, the dependencies are automatically added as shared libraries. For example, if you add Anypoint Connector for Database to your application and then configure the connection driver using Anypoint Studio, the driver is automatically added as a shared library in your project's `pom.xml` file. +If you use Anypoint Studio to configure a connector that uses external libraries, the dependencies are automatically added as shared libraries. For example, if you add Anypoint Connector for Database to your application and then configure the connection driver using Anypoint Studio, the driver is automatically added as a shared library in your project's `pom.xml` file. See xref:mmp-concept.adoc#configure-shared-libraries[Configure Shared Libraries] for configuration instructions. @@ -144,5 +144,4 @@ See xref:mmp-concept.adoc#configure-plugin-dependencies[Configure Plugin Depende == See Also -* xref:3.9@mule-runtime::classloader-control-in-mule.adoc[Mule 3 Class-loading] -* xref:1.1@mule-sdk::isolation.adoc[Mule SDK - About Class-loading Isolation] +* xref:mule-sdk::isolation.adoc[Mule SDK - About Class-loading Isolation] diff --git a/modules/ROOT/pages/about-components.adoc b/modules/ROOT/pages/about-components.adoc index 9df5524b8d..1d1cc2f22a 100644 --- a/modules/ROOT/pages/about-components.adoc +++ b/modules/ROOT/pages/about-components.adoc @@ -8,21 +8,12 @@ building blocks of flows in a Mule app. Core components provide the logic for processing a Mule event as it travels in a series of linked steps through the app. 
Examples include the Scheduler, For Each, and Logger components. -* In Studio, Mule components are accessible by clicking *Core* from the Mule palette. +In Studio, Mule components are accessible by clicking *Core* from the Mule palette. + image::components-core-studio.png[Core Components in Studio] + Notice that the components are subdivided into types, including Batch, Error Handling, and Flow Control. -+ -* In Design Center, when you are building a Mule app, you can find Mule -components listed among *Modules* in the *Select a Component* dialog. -+ -image::components-core-fd.png[Core Components in Design Center] -+ -Design Center provides many of the Core components described below. Though the -Design Center UI does not subdivide components into the types you see in the -Studio UI, it can help to conceptualize them by those types. == Batch @@ -55,7 +46,7 @@ data to a new output structure or format. == Endpoints -Endpoints (sometimes called Sources in Studio or Triggers in Design Center) include +Endpoints (sometimes called Sources in Studio) include components that initiate (or trigger) processing in a Mule flow. The xref:scheduler-concept.adoc[Scheduler] is an endpoint. It triggers a flow to start at a configurable interval. diff --git a/modules/ROOT/pages/about-flows.adoc b/modules/ROOT/pages/about-flows.adoc index 261262a06f..e1d8f721a1 100644 --- a/modules/ROOT/pages/about-flows.adoc +++ b/modules/ROOT/pages/about-flows.adoc @@ -12,7 +12,7 @@ An app can consist of a single flow, or it can break up processing into discrete flows and subflows that you add to the app and connect together. Mule apps in production environments typically use multiple flows and subflows to divide the app into functional modules or for -<> purposes. For example, one flow might +<> purposes. For example, one flow might receive a record and transform data into a given format that another flow processes in some special way. @@ -51,6 +51,7 @@ Because the contents of a subflow replace each Flow Reference component that ref For example, configuring a batch job inside a subflow causes the application to fail during deployment if the subflow is referenced from more than one Flow Reference component. The application fails to deploy because multiple instances of a batch job with the same job instance ID exist, which is not allowed. +[[error_handling]] == Error Handling Each flow (but not subflow) can have its own error handling. One reason for diff --git a/modules/ROOT/pages/about-mule-configuration.adoc b/modules/ROOT/pages/about-mule-configuration.adoc index 0318b74f4f..082f6ce24e 100644 --- a/modules/ROOT/pages/about-mule-configuration.adoc +++ b/modules/ROOT/pages/about-mule-configuration.adoc @@ -23,8 +23,8 @@ Global settings, such as the default transaction time-out, that apply to the ent Configuration Properties, message properties, and system properties. * xref:about-flows.adoc[Flows] + Combine components to define a message flow. -* xref:about-components#_endpoints[Sources (Endpoints or Triggers)] + -Trigger a flow. Sources are sometimes called Endpoints in Studio and Triggers in Flow Designer. +* xref:about-components#_endpoints[Sources (Endpoints)] + +Trigger a flow. Sources are sometimes called Endpoints in Studio. * xref:connectors::index.adoc[Connectors and Modules Configurations] + Declare configurations for any connectors and modules components used. 
* xref:about-components.adoc#_flow_control_routers[Routers] + diff --git a/modules/ROOT/pages/batch-error-handling-faq.adoc b/modules/ROOT/pages/batch-error-handling-faq.adoc index db9f9b74a4..dddf4de375 100644 --- a/modules/ROOT/pages/batch-error-handling-faq.adoc +++ b/modules/ROOT/pages/batch-error-handling-faq.adoc @@ -115,14 +115,19 @@ By default, Mule's batch jobs follow the first error handling strategy which hal [%header,cols="40a,30a,30a"] |=== -|Failed Record Handling Option 2+^|Batch Job -| | *Attribute* | *Value* -| Stop processing when a failed record is found. -| `maxFailedRecords`|`0` -| Continue processing indefinitely, regardless of the number of failed records. -| `maxFailedRecords` |`-1` -| Continue processing until reaching maximum number of failed records. -| `maxFailedRecords` | `integer` +|Failed Record Handling Option |Batch Job Attribute |Value + +|Stops processing when a failed record is found +|`maxFailedRecords` +|`0` + +|Continues processing indefinitely, regardless of the number of failed records +|`maxFailedRecords` +|`-1` + +|Continues processing until reaching the maximum number of failed records +|`maxFailedRecords` +|`integer` |=== [source,xml,linenums] diff --git a/modules/ROOT/pages/build-application-from-api.adoc b/modules/ROOT/pages/build-application-from-api.adoc index fe8989ab4b..0a39ca8678 100644 --- a/modules/ROOT/pages/build-application-from-api.adoc +++ b/modules/ROOT/pages/build-application-from-api.adoc @@ -76,7 +76,7 @@ Use this method if you want to start a project by either importing an existing R . In *API Implementation*, select *Specify API Definition File Location or URL*. . In *Location*, do one of the following: * If you created an `api.raml` file in Design Center, select *Design Center* . Login to Anypoint Platform if necessary, and select `api.raml`. -* If you didn’t create a RAML file in Design Center, select *Browse Files* and select the RAML or WSDL file that you created in a text editor. For a WSDL file, select a service and port from the drop-down menus or accept the defaults. +* If you didn't create a RAML file in Design Center, select *Browse Files* and select the RAML or WSDL file that you created in a text editor. For a WSDL file, select a service and port from the drop-down menus or accept the defaults. [start=6] . Accept the Location default options, and click *Finish*. diff --git a/modules/ROOT/pages/business-events.adoc b/modules/ROOT/pages/business-events.adoc index 352a503004..1392a9ccbe 100644 --- a/modules/ROOT/pages/business-events.adoc +++ b/modules/ROOT/pages/business-events.adoc @@ -40,5 +40,5 @@ This practice makes analysis and debugging easier and more intuitive at runtime. * xref:about-mule-event.adoc[Mule Events] * xref:transaction-management.adoc[Transaction Management] -* xref:business-events-in-components[Configure Default Events Tracking] +* xref:business-events-in-components.adoc[Configure Default Events Tracking] * xref:business-events-custom.adoc[Custom Business Event Component] diff --git a/modules/ROOT/pages/choosing-the-right-clustering-topology.adoc b/modules/ROOT/pages/choosing-the-right-clustering-topology.adoc index 5e705af124..fe6e59d4d7 100644 --- a/modules/ROOT/pages/choosing-the-right-clustering-topology.adoc +++ b/modules/ROOT/pages/choosing-the-right-clustering-topology.adoc @@ -4,7 +4,7 @@ include::_attributes.adoc[] endif::[] :keywords: deploy, cloudhub, on premises, on premise, clusters -You can deploy Mule in many different topologies. 
As you build your Mule application, it is important to think critically about how best to architect your application to achieve the desired availability, fault tolerance, and performance characteristics. This page outlines some of the solutions for achieving the right blend of these characteristics through clustering. There is no one correct approach for everyone, and designing your system is both an art and a science. If you need more assistance, MuleSoft Professional Services can help you by reviewing your architecture plan or designing it for you. For more information, http://www.mulesoft.com/contact[contact us]. +You can deploy Mule in many different topologies. As you build your Mule application, it is important to think critically about how best to architect your application to achieve the desired availability, fault tolerance, and performance characteristics. This page outlines some of the solutions for achieving the right blend of these characteristics through clustering when you deploy applications on premises. There is no one correct approach for everyone, and designing your system is both an art and a science. If you need more assistance, MuleSoft Professional Services can help you by reviewing your architecture plan or designing it for you. For more information, http://www.mulesoft.com/contact[contact us]. == About Clustering diff --git a/modules/ROOT/pages/consume-data-from-an-api.adoc b/modules/ROOT/pages/consume-data-from-an-api.adoc index 98d3064a16..2bd87a2b0a 100644 --- a/modules/ROOT/pages/consume-data-from-an-api.adoc +++ b/modules/ROOT/pages/consume-data-from-an-api.adoc @@ -79,7 +79,7 @@ POST, PUT, and DELETE requests almost always require headers. * URI and Query Parameters * Error handling -See xref:connectors::http-connector[HTTP Connector documentation] for more information about how to configure the request operation. +See xref:http-connector::index.adoc#input-sources[HTTP Connector documentation] for more information about how to configure the request operation. === Consume REST API Example diff --git a/modules/ROOT/pages/continuous-integration.adoc b/modules/ROOT/pages/continuous-integration.adoc index d6846cccab..3c6af2b1bf 100644 --- a/modules/ROOT/pages/continuous-integration.adoc +++ b/modules/ROOT/pages/continuous-integration.adoc @@ -33,7 +33,7 @@ You can deploy Mule applications using: * xref:api-manager::getting-started-proxy.adoc[The API Manager] * xref:runtime-manager::runtime-manager-agent.adoc[The Runtime Manager Agent] -You can create functional tests with xref:2.1@munit::index.adoc[MUnit Unit Testing]. +You can create functional tests with xref:munit::index.adoc[MUnit Unit Testing]. The mule-maven-plugin supports deployments to: @@ -66,4 +66,4 @@ If your target deployable is a web application and not a Mule application, consi == See Also * xref:using-maven-with-mule.adoc[Maven Support in Mule] -* xref:2.1@munit::index.adoc[MUnit Unit Testing] +* xref:munit::index.adoc[MUnit Unit Testing] diff --git a/modules/ROOT/pages/creating-and-managing-a-cluster-manually.adoc b/modules/ROOT/pages/creating-and-managing-a-cluster-manually.adoc index 5faa5da9fe..8b3e01be34 100644 --- a/modules/ROOT/pages/creating-and-managing-a-cluster-manually.adoc +++ b/modules/ROOT/pages/creating-and-managing-a-cluster-manually.adoc @@ -4,7 +4,7 @@ include::_attributes.adoc[] endif::[] :keywords: cluster, deploy -This page describes manual creation and configuration of a cluster. 
There are two ways to create and manage clusters: +There are two ways to create and manage clusters for on-premises deployments: * Using Runtime Manager + @@ -16,7 +16,12 @@ See xref:runtime-manager::cluster-about.adoc[Clusters] for configuration instruc * Do not mix cluster management tools. + Manual cluster configuration is not synced to Anypoint Runtime Manager, so any change you make in the platform overrides the cluster configuration files. To avoid this scenario, use only one method for creating and managing your clusters: either manual configuration or configuration using Anypoint Runtime Manager. -* All nodes in a cluster must have the same Mule runtime engine and Runtime Manager agent version. If you are using a cumulative patch release, such as 4.3.0-20210322, all instances of Mule must be the same cumulative patch version. +* All nodes in a cluster must have the same versions of: +** Mule runtime engine ++ +If you are using a cumulative patch release, such as 4.3.0-20210322, all instances of Mule must be the same cumulative patch version. +** Runtime Manager agent version +** Java == Creating a Cluster Manually @@ -87,6 +92,11 @@ Quorum feature is only valid for components that use Object Store. === Object Store Persistence +[NOTE] +-- +Ensure you set up a centralized JDBC store for the cluster object store persistence. Otherwise, shutting down all cluster nodes causes the content of object stores to be lost, no matter if the persistent setting is enabled on the object store configuration. +-- + You can persistently store JDBC data in a central system that is accessible by all cluster nodes when using Mule runtime engine on-premises. The following relational database systems are supported: diff --git a/modules/ROOT/pages/cryptography-jce.adoc b/modules/ROOT/pages/cryptography-jce.adoc deleted file mode 100644 index 669dbe9d58..0000000000 --- a/modules/ROOT/pages/cryptography-jce.adoc +++ /dev/null @@ -1,660 +0,0 @@ -= JCE Cryptography -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] - -The JCE strategy enables you to use the wider range of cryptography capabilities provided by the Java Cryptography Extension. - -You can use cryptography capabilities in two ways: - -* Password-based encryption (PBE): + -This method enables you to encrypt and sign content by providing only an encryption password. -* Key-based encryption: + -Similar to how PGP and XML encryption works, this method enables you to configure a symmetric or asymmetric key to perform encryption and signing operations. - -You can encrypt all, or part of a message using any of these two methods. - -== PBE - -This method applies a hash function over the provided password to generate a symmetric key that is compatible with standard encryption algorithms. Because PBE only requires a password, a global configuration element is not needed for the PBE operations. - -=== Configure Password-Based Encryption from Anypoint Studio - -To configure PBE from Anypoint Studio, follow these steps: - -. From the Mule palette, add *Crypto* to your project. -+ -See xref:cryptography.adoc#install-crypto-module[Install the Extension] for instructions. -. Select the desired operation, and drag the component to the flow: -+ -image::mruntime-crypto-pbe-add.png[crypto-pbe-add] -. 
In the component view, configure the *Algorithm* and *Password* properties: -+ -image::mruntime-crypto-pbe-config.png[crypto-pbe-config] - -=== XML Examples - -The following are XML examples for each each of the PBE operations: - -* PBE Encryption -+ -[source,xml,linenums] ----- - ----- -+ -If no algorithm is specified, `PBEWithHmacSHA256AndAES_128` is used. - -* PBE Decryption -+ -[source,xml,linenums] ----- - ----- - -* PBE Signature -+ -[source,xml,linenums] ----- - ----- -+ -If no algorithm is specified, `PBEWithHmacSHA256` is used. - -* PBE Signature Validation -+ -[source,xml,linenums] ----- - ----- -+ -The `expected` parameter defines the signature used to validate the message. - -== Key-Based Encryption - -Configure a symmetric or asymmetric key to perform encryption and signing operations. - -=== Configure Key-Based Encryption from Anypoint Studio - -To configure key-based encryption operations from Anypoint Studio, follow these steps: - -. From the Mule palette, add *Crypto* to your project. -+ -See xref:cryptography.adoc#install-crypto-module[Install the Extension] for instructions. -. Select the desired operation, and drag the component to the flow: -+ -image::mruntime-crypto-jce-add.png[crypto-jce-add] -. Open the component properties and select an existing module configuration, or create a new one by specifying values for *Keystore*, *Type* (JKS, JCEKS, PKCS12), and *Password*. -+ -You can also add symmetric or asymmetric key information to be used in the sign operations: -+ -image::mruntime-crypto-jce-global-config.png[crypto-jce-global-config] -. Configure *Key selection* by using a *Key id* value previously defined in the module configuration, or define a new one for this operation: -+ -image::mruntime-crypto-jce-config.png[crypto-jce-config] -. Select the algorithm to use during the operation. - -=== XML Examples - -The following XML examples show a JCE configuration that defines symmetric and asymmetric keys and different operations using these keys. - -* Configuration -+ -In this example, a keystore with different types of keys is defined in a JCE configuration: -+ -[source,xml,linenums] ----- - - - - - - - - - ----- - -* Asymmetric Encryption -+ -The following example operations use the asymmetric keys defined in the previous configuration. -+ -.Encrypting a Message -[source,xml,linenums] ----- - ----- -+ -.Decrypting a Message -[source,xml,linenums] ----- - ----- - -* Symmetric Encryption -+ -The following example operations use the symmetric keys defined in the previous configuration. -+ -.Encrypting a Message -[source,xml,linenums] ----- - ----- -+ -.Decrypting a Message -[source,xml,linenums] ----- - ----- - -* Signature and Validation -+ -The following are examples of sign and validate operations that use a key defined in the previous configuration: -+ -.Signing a Message -[source,xml,linenums] ----- - ----- -+ -.Validating a Signature -[source,xml,linenums] ----- - ----- -+ -The `expected` parameter defines the signature used to validate the message. - -== Reference - -=== Module Configuration - -JCE configuration for Java keystores and inline keys. - -==== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -|Name | String | The name for this configuration. Connectors reference the configuration with this name. 
| | *x*{nbsp} -| Keystore a| String | +++Path to the keystore file.+++ | | {nbsp} -| Type a| Enumeration, one of: - -** `JKS` -** `JCEKS` -** `PKCS12` | +++Type of the keystore.+++ | `JKS` | {nbsp} -| Password a| String | +++Password for unlocking the keystore.+++ | | {nbsp} -| Jce Key Infos a| Array of One of: - -* <> -* <> | +++List of keys to be considered, with internal IDs for referencing them.+++ | | {nbsp} -| Expiration Policy a| <> | +++Configures the minimum amount of time that a dynamic configuration instance can remain idle before the runtime considers it eligible for expiration. This does not mean that the platform will expire the instance at the exact moment that it becomes eligible. The runtime will actually purge the instances when it sees it fit.+++ | | {nbsp} -|=== - -[[jceDecrypt]] -== Jce Decrypt Operation -`` - -+++ -Decrypt a stream using JCE, with a key. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | You can decrypt all, or part of a message by using a DataWeave expression. + -For example, you can set Content to `#[payload.name]` to decrypt only an encrypted variable called `name` from the payload | `#[payload]` | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Cipher a| String | A raw cipher string in the form "algorithm/mode/padding" according to the Java crypto documentation, for example `AES/CBC/PKCS5Padding`. Note that not all combinations are valid. | | {nbsp} -| Algorithm a| Enumeration, one of: - -** `AES` -** `AESWrap` -** `ARCFOUR` -** `Blowfish` -** `DES` -** `DESede` -** `RC2` -** `DESedeWrap` -** `RSA` a| Algorithm from a list of valid definitions. When you specify this field, Mule automatically selects the mode and padding to use according to the following list: - -* `AES/CBC/PKCS5Padding` -* `AESWrap/ECB/NoPadding` -* `ARCFOUR/ECB/NoPadding` -* `Blowfish/CBC/PKCS5Padding` -* `DES/CBC/PKCS5Padding` -* `DESede/CBC/PKCS5Padding` -* `RC2/CBC/PKCS5Padding` -* `DESedeWrap/CBC/NoPadding` -* `RSA/ECB/OAEPWithSHA-256AndMGF1Padding` | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} -* `CRYPTO:DECRYPTION` {nbsp} - - -[[jceEncrypt]] -== Jce Encrypt Operation -`` - -+++ -Encrypt a stream using JCE, with a key. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | You can encrypt all, or part of a message by using a DataWeave expression. + -For example, you can set Content to `#[payload.name]` to encrypt only a variable called `name` from the payload | `#[payload]` | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Cipher a| String | A raw cipher string in the form "algorithm/mode/padding" according to the Java crypto documentation, for example `AES/CBC/PKCS5Padding`. Note that not all combinations are valid. | | {nbsp} -| Algorithm a| Enumeration, one of: - -** `AES` -** `AESWrap` -** `ARCFOUR` -** `Blowfish` -** `DES` -** `DESede` -** `RC2` -** `DESedeWrap` -** `RSA` a| Algorithm from a list of valid definitions. When you specify this field, Mule automatically selects the mode and padding to use according to the following list: - -* `AES/CBC/PKCS5Padding` -* `AESWrap/ECB/NoPadding` -* `ARCFOUR/ECB/NoPadding` -* `Blowfish/CBC/PKCS5Padding` -* `DES/CBC/PKCS5Padding` -* `DESede/CBC/PKCS5Padding` -* `RC2/CBC/PKCS5Padding` -* `DESedeWrap/CBC/NoPadding` -* `RSA/ECB/OAEPWithSHA-256AndMGF1Padding` | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} - -[[jceSign]] -== Jce Sign Operation -`` - -+++ -Sign a stream using JCE, with a key. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | +++The content to sign+++ | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `MD2withRSA` -** `MD5withRSA` -** `SHA1withRSA` -** `SHA224withRSA` -** `SHA256withRSA` -** `SHA384withRSA` -** `SHA512withRSA` -** `NONEwithDSA` -** `SHA1withDSA` -** `SHA224withDSA` -** `SHA256withDSA` -** `HmacMD5` -** `HmacSHA1` -** `HmacSHA224` -** `HmacSHA256` -** `HmacSHA384` -** `HmacSHA512` | +++The algorithm used for signing+++ | `HmacSHA256` | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -String - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:SIGNATURE` {nbsp} - - -[[jceValidate]] -== Jce Validate Operation -`` - -+++ -Validate a stream against a signature, using a key. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Value a| Binary | +++the message to authenticate+++ | `#[payload]` | {nbsp} -| Expected a| String | +++the signature to validate+++ | | *x*{nbsp} -| Algorithm a| Enumeration, one of: - -** `MD2withRSA` -** `MD5withRSA` -** `SHA1withRSA` -** `SHA224withRSA` -** `SHA256withRSA` -** `SHA384withRSA` -** `SHA512withRSA` -** `NONEwithDSA` -** `SHA1withDSA` -** `SHA224withDSA` -** `SHA256withDSA` -** `HmacMD5` -** `HmacSHA1` -** `HmacSHA224` -** `HmacSHA256` -** `HmacSHA384` -** `HmacSHA512` | +++The algorithm used for signing+++ | `HmacSHA256` | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -|=== - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:VALIDATION` {nbsp} - -[[jceDecryptPbe]] -== Jce Decrypt Pbe Operation -`` - -+++ -Decrypt a stream using JCE, with a password. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Content a| Binary | You can decrypt all, or part of a message by using a DataWeave expression. 
+ -For example, you can set Content to `#[payload.name]` to decrypt only an encrypted variable called `name` from the payload | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `PBEWithMD5AndDES` -** `PBEWithMD5AndTripleDES` -** `PBEWithSHA1AndDESede` -** `PBEWithSHA1AndRC2_40` -** `PBEWithSHA1AndRC2_128` -** `PBEWithSHA1AndRC4_40` -** `PBEWithSHA1AndRC4_128` -** `PBEWithHmacSHA1AndAES_128` -** `PBEWithHmacSHA224AndAES_128` -** `PBEWithHmacSHA256AndAES_128` -** `PBEWithHmacSHA384AndAES_128` -** `PBEWithHmacSHA512AndAES_128` -** `PBEWithHmacSHA1AndAES_256` -** `PBEWithHmacSHA224AndAES_256` -** `PBEWithHmacSHA256AndAES_256` -** `PBEWithHmacSHA384AndAES_256` -** `PBEWithHmacSHA512AndAES_256` | +++The algorithm for generating a key from the password+++ | `PBEWithHmacSHA256AndAES_128` | {nbsp} -| Password a| String | +++The password for decryption+++ | | *x*{nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} -* `CRYPTO:DECRYPTION` {nbsp} - -[[jceEncryptPbe]] -== Jce Encrypt Pbe Operation -`` - -+++ -Encrypt a stream using JCE, with a password. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Content a| Binary | You can encrypt all, or part of a message by using a DataWeave expression. 
+ -For example, you can set Content to `#[payload.name]` to encrypt only a variable called `name` from the payload | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `PBEWithMD5AndDES` -** `PBEWithMD5AndTripleDES` -** `PBEWithSHA1AndDESede` -** `PBEWithSHA1AndRC2_40` -** `PBEWithSHA1AndRC2_128` -** `PBEWithSHA1AndRC4_40` -** `PBEWithSHA1AndRC4_128` -** `PBEWithHmacSHA1AndAES_128` -** `PBEWithHmacSHA224AndAES_128` -** `PBEWithHmacSHA256AndAES_128` -** `PBEWithHmacSHA384AndAES_128` -** `PBEWithHmacSHA512AndAES_128` -** `PBEWithHmacSHA1AndAES_256` -** `PBEWithHmacSHA224AndAES_256` -** `PBEWithHmacSHA256AndAES_256` -** `PBEWithHmacSHA384AndAES_256` -** `PBEWithHmacSHA512AndAES_256` | +++The algorithm for generating a key from the password+++ | `PBEWithHmacSHA256AndAES_128` | {nbsp} -| Password a| String | +++The password for encryption+++ | | *x*{nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} - -[[jceSignPbe]] -== Jce Sign Pbe Operation -`` - -+++ -Sign a stream using JCE, with a key. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Content a| Binary | +++the content to sign+++ | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `HmacPBESHA1` -** `PBEWithHmacSHA1` -** `PBEWithHmacSHA224` -** `PBEWithHmacSHA256` -** `PBEWithHmacSHA384` -** `PBEWithHmacSHA512` | +++The algorithm used for signing+++ | `PBEWithHmacSHA256` | {nbsp} -| Password a| String | +++The password used to sign+++ | | *x*{nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -String - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:SIGNATURE` {nbsp} - -[[jceValidatePbe]] -== Jce Validate Pbe Operation -`` - -+++ -Validate a stream against a signature, using a key. 
-+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Value a| Binary | +++the message to authenticate+++ | `#[payload]` | {nbsp} -| Expected a| String | +++the signature to validate+++ | | *x*{nbsp} -| Algorithm a| Enumeration, one of: - -** `HmacPBESHA1` -** `PBEWithHmacSHA1` -** `PBEWithHmacSHA224` -** `PBEWithHmacSHA256` -** `PBEWithHmacSHA384` -** `PBEWithHmacSHA512` | +++The algorithm used for signing+++ | `PBEWithHmacSHA256` | {nbsp} -| Password a| String | +++The password used to sign+++ | | *x*{nbsp} -|=== - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:VALIDATION` {nbsp} - -== Types Definition -[[ExpirationPolicy]] -=== Expiration Policy - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max Idle Time a| Number | A scalar time value for the maximum amount of time a dynamic configuration instance should be allowed to be idle before it's considered eligible for expiration | | -| Time Unit a| Enumeration, one of: - -** `NANOSECONDS` -** `MICROSECONDS` -** `MILLISECONDS` -** `SECONDS` -** `MINUTES` -** `HOURS` -** `DAYS` | A time unit that qualifies the maxIdleTime attribute | | -|=== - -[[repeatable-in-memory-stream]] -=== Repeatable In Memory Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Initial Buffer Size a| Number | This is the amount of memory that will be allocated in order to consume the stream and provide random access to it. If the stream contains more data than can be fit into this buffer, then it will be expanded by according to the `bufferSizeIncrement` attribute, with an upper limit of `maxInMemorySize`. | | -| Buffer Size Increment a| Number | This is by how much will be buffer size by expanded if it exceeds its initial size. Setting a value of zero or lower will mean that the buffer should not expand, meaning that a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised when the buffer gets full. | | -| Max Buffer Size a| Number | This is the maximum amount of memory that will be used. If more than that is used then a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised. A value lower or equal to zero means no limit. | | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which all these attributes are expressed | | -|=== - -[[repeatable-file-store-stream]] -=== Repeatable File Store Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max In Memory Size a| Number | Defines the maximum memory that the stream should use to keep data in memory. If more than that is consumed then it will start to buffer the content on disk. | | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which maxInMemorySize is expressed | | -|=== - -[[JceAsymmetricKeyInfo]] -=== Jce Asymmetric Key Info - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key Id a| String | Internal key ID for referencing from operations. | | x -| Alias a| String | Alias of the key in the keystore. | | x -| Password a| String | Password used to unlock the private part of the key. 
| | -|=== - -[[JceSymmetricKeyInfo]] -=== Jce Symmetric Key Info - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key Id a| String | Internal key ID for referencing from operations. | | x -| Alias a| String | Alias of the key in the keystore. | | x -| Password a| String | Password used to unlock the key. | | x -|=== diff --git a/modules/ROOT/pages/cryptography-pgp.adoc b/modules/ROOT/pages/cryptography-pgp.adoc deleted file mode 100644 index 8d030d14d8..0000000000 --- a/modules/ROOT/pages/cryptography-pgp.adoc +++ /dev/null @@ -1,618 +0,0 @@ -= PGP Cryptography -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] -:keywords: cryptography, module, sign, encrypt, pgp, AES - -Mule can encrypt all or part of a message using Pretty Good Privacy (PGP). PGP combines data compression and data encryption to secure messages. The compression reduces the size of the payload to help reduce the transmission time later on your application. - -Due to its increased complexity, PGP encryption is a heavy-load task when compared to JCE or XML encryption. - -This section addresses these scenarios: - -* Encryption: Using another party's public key to encrypt an outgoing message in a Mule app. -* Decryption: Using your own private key to decrypt an incoming message in a Mule app. - -== Prerequisites - -This document assumes that you are reasonably familiar with PGP encryption, as well as the concepts of public and private keys and asymmetric cryptography. - -== Configure PGP Encryption from Anypoint Studio - -To configure PGP encryption from Anypoint Studio, follow these steps: - -. From the Mule palette, add *Crypto* to your project. -+ -See xref:cryptography.adoc#install-crypto-module[Install the Extension] for instructions. -. Select the desired operation, and drag the component to the flow: -+ -image::mruntime-crypto-pgp-add.png[crypto-pgp-add] -. Open the component properties and select an existing *Module configuration*, or create a new one by specifying the *Public keyring* file and the *Private keyring* file. -+ -You can also add asymmetric key information to be used in the sign operations: -+ -image::mruntime-crypto-pgp-global-config.png[crypto-pgp-global-config] -. Configure *Key selection* by using a *Key id* value previously defined in the module configuration, or define a new one for this operation: -+ -image::mruntime-crypto-pgp-config.png[crypto-pgp-config] -. Select the algorithm to use during the operation. - -== Working with Subkeys - -A key can contain subkeys, according to the RFC-4880 standard specification. When working with subkeys, use the `fingerprint` attribute of the `` element in the XML configuration to specify the key to use. - -For example, if you use different keys for signing and encrypting operations, and each of these keys use different algorithms, like DSA and ElGamal, then you must reference the appropriate key's fingerprint, depending on the operation you want to perform. - -In this case, you reference the ElGamal fingerprint to encrypt your messages and the DSA fingerprint to sign your messages. - -== Encrypting Messages with Public Keys - -During PGP encryption, the sender of the message must encrypt its content using the receiver's public key. So, whenever you want to encrypt messages in your Mule app using someone else's public key, you must add the public key to your key ring. 
When adding a new PGP configuration to your Mule app, you need to provide your key ring file so the encryption module can get the public key from it to encrypt the message. - -. Use a tool such as GPG Suite to import the other party's public key. See below for details. -. Using the same tool, export the public key, selecting `binary` as the output format. This produces a key ring file with a `.gpg` extension. -. Ensure that the key ring (`.gpg`) file is stored where the Mule app can access it during runtime. - -.Example: PGP Configuration -[source,xml,linenums] ----- - - - - - ----- - -* Using the Encrypt Operation -+ -The next example returns an ASCII-armored encrypted payload, which is suitable for sending over plain-text channels: -+ -[source,xml,linenums] ----- - ----- - -* Using the Encrypt Binary Operation -+ -If you want to return a binary output instead, you can use the `pgp-encrypt-binary` operation: -+ -[source,xml,linenums] ----- - ----- -+ -Producing a binary output is faster than using ASCII-armored. However, the output is not standard and might not be ideal to send to other systems for decryption. - -* Using the Binary to Armored Operation -+ -If you need to send a payload with binary output to another system, you can transform it to ASCII-armored: -+ -[source,xml,linenums] ----- - ----- -This operation has a single input parameter, the message payload to transform. - -== Encrypt and Sign - -In addition to encrypting, you can atomically encrypt and sign a message, which returns a message (in ASCII-armored format) similar to the encrypted one. In this case, the returned message also has a signature inside its encrypted contents. The signature provides an integrity check of the original message. - -To encrypt and sign, the signer private key (which is usually the sender) must be in the public key ring. This process always produces an ASCII-armored output. - -.Example: PGP Configuration -[source,xml,linenums] ----- - - - - ----- - -== Decrypt - -During PGP decryption, the receiver of the message must use its private key to decrypt the contents of a message that was encrypted using a public key. -Therefore, the receiver must distribute its public key to those who will use it to send encrypted messages. - -.Example: PGP Configuration -[source,xml,linenums] ----- - - - - - ----- -In the example above, notice that you need to provide at least three parameters to be able to use the private key ring in the decrypt operation: - -* Key ID (`keyId`): the internal ID that will allow you to reference this key from an operation. -* Key Fingerprint (`fingerprint`): The last 16 characters of your key fingerprint, which can be obtained from your external GPG tool (such as GPG Keychain). -* Passphrase (`passphrase`): The passphrase of the private key. - -.Example: Using the Decrypt Operation -[source,xml,linenums] ----- - ----- - -== Signing - -Sign a message using a configured private key. - -.Example: PGP Configuration -[source,xml,linenums] ----- - - - - - ----- - -.Example: Using the Sign Operation -[source,xml,linenums] ----- - ----- - -== Validating a Signature - -Validate the signature of a message using the signer's public key. - -.Example: PGP Configuration -[source,xml,linenums] ----- - - - - - ----- - -.Example: Using the Validate Operation -[source,xml,linenums] ----- - ----- - -== Reference - -=== Module Configuration - -Keystore configuration for PGP. Contains a list of keys with internal names to be used in the operations. 
- -==== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -|Name | String | The name for this configuration. Connectors reference the configuration with this name. | | *x*{nbsp} -| Public Keyring a| String | +++Public key ring file.+++ | | {nbsp} -| Private Keyring a| String | +++Private key ring file.+++ | | {nbsp} -| Pgp Key Infos a| Array of One of: - -* <> | +++List of keys to be considered, with internal IDs for referencing them.+++ | | {nbsp} -| Expiration Policy a| <> | +++Configures the minimum amount of time that a dynamic configuration instance can remain idle before the runtime considers it eligible for expiration. This does not mean that the platform will expire the instance at the exact moment that it becomes eligible. The runtime will actually purge the instances when it sees it fit.+++ | | {nbsp} -|=== - -[[pgpDecrypt]] -== Pgp Decrypt Operation -`` - -+++ -Decrypt a stream using PGP, giving the original data as a result. The decryption is done with the private key, so the secret passphrase must be provided. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | You can decrypt all, or part of a message by using a DataWeave expression. + -For example, you can set Content to `#[payload.name]` to decrypt only an encrypted variable called `name` from the payload | `#[payload]` | {nbsp} -| File Name a| String | +++the internal file name to decrypt, if not present the first will be used+++ | | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -| Validate Signature if Found a| Boolean | +++If a contents signature is found in any of the internal decryption stages, the operation will attempt to validate the decrypted contents. Note that this requires the *Signer's public key* to be present in the operation config's public keyring. Also, if the validation fails, the decryption operation will also.+++ | `false` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:DECRYPTION` {nbsp} - - -[[pgpEncrypt]] -== Pgp Encrypt Operation -`` - -+++ -Encrypt a stream using PGP, giving an ASCII-armored stream output as a result. The encryption is done with the public key of the recipient, so the secret passphrase is not required. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | You can encrypt all, or part of a message by using a DataWeave expression. 
+ -For example, you can set Content to `#[payload.name]` to encrypt only a variable called `name` from the payload | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `IDEA` -** `TRIPLE_DES` -** `CAST5` -** `BLOWFISH` -** `SAFER` -** `DES` -** `AES_128` -** `AES_192` -** `AES_256` -** `TWOFISH` -** `CAMELLIA_128` -** `CAMELLIA_192` -** `CAMELLIA_256` | +++the symmetric algorithm to use for encryption+++ | `AES_256` | {nbsp} -| File Name a| String | +++the internal file name to use in the resulting PGP header+++ | +++stream+++ | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the PGP configuration.+++ | | {nbsp} -| Pgp Key Info a| One of: - -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} - -[[pgpEncryptAndSign]] -== PGP Encrypt and Sign Operation - -`` - -You can encrypt and sign a stream using PGP, producing an ASCII-armored stream output as a result. The encryption requires the public key of the recipient, so the secret passphrase is not required. The secret passphrase is required for signing because the process uses private key of the signer (usually the sender). - - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x* -| Content a| Binary | You can encrypt all, or part of a message by using a DataWeave expression. + -For example, you can set Content to `#[payload.name]` to encrypt only a variable called `name` from the payload | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `IDEA` -** `TRIPLE_DES` -** `CAST5` -** `BLOWFISH` -** `SAFER` -** `DES` -** `AES_128` -** `AES_192` -** `AES_256` -** `TWOFISH` -** `CAMELLIA_128` -** `CAMELLIA_192` -** `CAMELLIA_256` | Symmetric algorithm to use for encryption. | `AES_256` | {nbsp} -| File Name a| String | The internal file name to use in the resulting PGP header. | stream | {nbsp} -| Encryption key selection a| A child element with <> | The identifier of the recipient public key. | | {nbsp} -| Sign key selection a| A child element with <> | The identifier of the signer's private key. | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | Configure if repeatable streams should be used and their behavior | | {nbsp} -| Key Id a| String | The key ID, as defined in the PGP configuration. | | {nbsp} -| Target Variable a| String | The name of a variable in which to store the operation's output. | | {nbsp} -| Target Value a| String | An expression that is evaluated against the operation's output. The result of that expression is stored in the target variable. | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} - - -[[pgpEncryptBinary]] -== PGP Encrypt Binary Operation -`` - -You can encrypt a stream using PGP, producing a binary output as a result. 
Because the encryption process uses the public key of the recipient, the secret passphrase is not required. - - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | You can encrypt all, or part of a message by using a DataWeave expression. + -For example, you can set Content to `#[payload.name]` to encrypt only a variable called `name` from the payload | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `IDEA` -** `TRIPLE_DES` -** `CAST5` -** `BLOWFISH` -** `SAFER` -** `DES` -** `AES_128` -** `AES_192` -** `AES_256` -** `TWOFISH` -** `CAMELLIA_128` -** `CAMELLIA_192` -** `CAMELLIA_256` | +++the symmetric algorithm to use for encryption+++ | `AES_256` | {nbsp} -| File Name a| String | +++the internal file name to use in the resulting PGP header+++ | +++stream+++ | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the PGP configuration.+++ | | {nbsp} -| Pgp Key Info a| One of: - -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} - - -[[pgpSign]] -== Pgp Sign Operation -`` - -+++ -Create a detached (standalone) PGP signature of the stream. The signing is done with the private key of the sender, so the secret passphrase must be provided. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | +++the content to sign+++ | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `MD5` -** `RIPEMD160` -** `TIGER_192` -** `HAVAL_5_160` -** `DOUBLE_SHA` -** `SHA1` -** `SHA224` -** `SHA256` -** `SHA384` -** `SHA512` | +++the digest (or hashing) algorithm+++ | `SHA256` | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the PGP configuration.+++ | | {nbsp} -| Pgp Key Info a| One of: - -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:SIGNATURE` {nbsp} - - -[[pgpSignBinary]] -== Pgp Sign Binary Operation -`` - -+++ -Create a detached (standalone) PGP signature of the stream. The signing is done with the private key of the sender, so the secret passphrase must be provided. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | +++the content to sign+++ | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `MD5` -** `RIPEMD160` -** `TIGER_192` -** `HAVAL_5_160` -** `DOUBLE_SHA` -** `SHA1` -** `SHA224` -** `SHA256` -** `SHA384` -** `SHA512` | +++the digest (or hashing) algorithm+++ | `SHA256` | {nbsp} -| Output Mime Type a| String | +++The mime type of the payload that this operation outputs.+++ | | {nbsp} -| Output Encoding a| String | +++The encoding of the payload that this operation outputs.+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the PGP configuration.+++ | | {nbsp} -| Pgp Key Info a| One of: - -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:SIGNATURE` {nbsp} - - -[[pgpValidate]] -== Pgp Validate Operation -`` - -+++ -Validate a PGP signature against a stream, to authenticate it. The validation is done with the public key of the sender, so the secret passphrase is not required. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Value a| Binary | +++the message to authenticate+++ | `#[payload]` | {nbsp} -| Expected a| Binary | +++the signature+++ | | *x*{nbsp} -|=== - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:VALIDATION` {nbsp} - - -[[pgpBinaryToArmored]] -== Pgp Binary To Armored Operation -`` - -+++ -Converts an encrypted PGP message or a PGP signature to an ASCII armored representation, suitable for plain text channels. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Content a| Binary | +++the content to convert+++ | `#[payload]` | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:PARAMETERS` {nbsp} - -== Types -[[ExpirationPolicy]] -=== Expiration Policy - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max Idle Time a| Number | A scalar time value for the maximum amount of time a dynamic configuration instance should be allowed to be idle before it's considered eligible for expiration | | -| Time Unit a| Enumeration, one of: - -** `NANOSECONDS` -** `MICROSECONDS` -** `MILLISECONDS` -** `SECONDS` -** `MINUTES` -** `HOURS` -** `DAYS` | A time unit that qualifies the maxIdleTime attribute | | -|=== - -[[repeatable-in-memory-stream]] -=== Repeatable In Memory Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Initial Buffer Size a| Number | This is the amount of memory that will be allocated in order to consume the stream and provide random access to it. If the stream contains more data than can be fit into this buffer, then it will be expanded by according to the `bufferSizeIncrement` attribute, with an upper limit of `maxInMemorySize`. | | -| Buffer Size Increment a| Number | This is by how much will be buffer size by expanded if it exceeds its initial size. Setting a value of zero or lower will mean that the buffer should not expand, meaning that a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised when the buffer gets full. | | -| Max Buffer Size a| Number | This is the maximum amount of memory that will be used. If more than that is used then a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised. A value lower or equal to zero means no limit. | | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which all these attributes are expressed | | -|=== - -[[repeatable-file-store-stream]] -=== Repeatable File Store Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max In Memory Size a| Number | Defines the maximum memory that the stream should use to keep data in memory. If more than that is consumed then it will start to buffer the content on disk. 
| | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which maxInMemorySize is expressed | | -|=== - -[[PgpAsymmetricKeyInfo]] -=== Pgp Asymmetric Key Info - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key Id a| String | Internal key ID for referencing from operations. | | x -| Key Pair Identifier a| <> | A way to identify the key inside the keystore. | | x -| Passphrase a| String | The password for unlocking the secret part of the key. | | -|=== - -[[PgpAsymmetricKeyIdentifier]] -=== PGP Asymmetric Key Identifier - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Fingerprint a| String | The Fingerprint of the configured key | | -| Principal a| String | A combination of name and email specified while generating the key. When you use this field, you use the primary key. -Do not use this field when you are using keys with different algorithms to sign and encrypt (for example, DSA and Elgamal); specify the proper key by using the `fingerprint` property instead. | | -|=== - -[[PgpKeySelection]] -=== PGP Key Selection - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key is a| String | Internal key ID for referencing from operations | | -| Pgp Key info a| One of: - -* <> | An inline key definition. | | -|=== diff --git a/modules/ROOT/pages/cryptography-reference.adoc b/modules/ROOT/pages/cryptography-reference.adoc deleted file mode 100644 index fef5686db4..0000000000 --- a/modules/ROOT/pages/cryptography-reference.adoc +++ /dev/null @@ -1,95 +0,0 @@ -= General Operations -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] - -The Cryptography module provides operations to calculate and validate a checksum to check data for errors. These operations are independent of the encryption strategy used. - -== Checksum Overview - -Checksum operations enable you to ensure message integrity. The Calculate Checksum operation acts as an enricher to generate a checksum for a message when it enters a system, and then the Validate Checksum operation acts as a filter to verify the checksum when the message leaves the system. If the entry and exit values do not match, a `CRYPTO:VALIDATION` error is raised. - -This pair of operations enables you to verify that a message remains intact between the sender and the receiver. Because checksum operations do not provide encryption or append a signature to the message, you can use the operations in conjunction with any other security features. - -[[calculateChecksum]] -== Calculate Checksum -`` - -+++ -Calculates the checksum of a given content or value, which can be an expression. You can select the hashing algorithm to use. 
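As a minimal sketch of how the checksum pair is typically wired together (flow structure and variable name are placeholders, and element names should be checked against your module version):

[source,xml,linenums]
----
<!-- Calculate the checksum on entry and keep it in a variable -->
<crypto:calculate-checksum algorithm="SHA_256" target="checksum"/>
<!-- ...message is processed and leaves the system... -->
<!-- Validate the checksum on exit; raises CRYPTO:VALIDATION if the content changed -->
<crypto:validate-checksum algorithm="SHA_256" expected="#[vars.checksum]"/>
----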
-+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Algorithm a| Enumeration, one of: - -** `CRC32` -** `MD2` -** `MD5` -** `SHA_1` -** `SHA_256` -** `SHA_512` | +++the checksum algorithm+++ | `SHA_256` | {nbsp} -| Content a| Binary | +++The content for calculating the checksum+++ | `#[payload]` | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -String - -=== Throws -* `CRYPTO:CHECKSUM` {nbsp} - - -[[validateChecksum]] -== Validate Checksum -`` - -+++ -Validates the checksum of the content or value against the checksum previously calculated using the Calculate Checksum operation. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Algorithm a| Enumeration, one of: - -** `CRC32` -** `MD2` -** `MD5` -** `SHA_1` -** `SHA_256` -** `SHA_512` | +++The checksum algorithm+++ | `SHA_256` | {nbsp} -| Value a| Binary | +++The content for calculating the checksum+++ | `#[payload]` | {nbsp} -| Expected a| String | +++The expected checksum as an hexadecimal string+++ | | *x*{nbsp} -|=== - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:VALIDATION` {nbsp} - -== Configure Use Random Initialization Vectors - -You can enable the *Use random IVs* field to use random initialization vectors (IVs). If you enable this field, the decryption algorithm assumes IVs are prepended to the ciphertext during the decryption operation. To configure this field in Anypoint Studio, follow these steps: - -. In Studio, drag a Cryptography module operation to your flow, for example *Jce sign*. -. Select the operation from the flow. -. In the operation configuration screen, click the plus sign to access the module global configuration. -. In the *Global Element Properties* window, enable the *Use random IVs* field. - -image::crypto-random-iv.png[Use random IVs field selected] - -In the *XML editor* window, the *Use random IVs* field looks like this: - -[source,xml,linenums] ----- -crypto:jce-config name="Crypto_Jce" doc:name="Crypto Jce" keystore="/Users/MuleSoft/Desktop/jcekeystore.jks" password="mulesoft" useRandomIVs="true"> - - - - ----- \ No newline at end of file diff --git a/modules/ROOT/pages/cryptography-troubleshooting.adoc b/modules/ROOT/pages/cryptography-troubleshooting.adoc deleted file mode 100644 index 20f79b94d7..0000000000 --- a/modules/ROOT/pages/cryptography-troubleshooting.adoc +++ /dev/null @@ -1,97 +0,0 @@ -= Troubleshoot Cryptography Module - -To troubleshoot the Cryptography module, become familiar with app logs, PGP protocol, attached and detached signatures, and interpreting commonly thrown messages. - -== View the App Log - -If you encounter problems while running your Mule runtime engine (Mule) app, you can view the app log as follows: - -* If you’re running the app from Anypoint Platform, the output is visible in the Anypoint Studio console window. -* If you’re running the app using Mule from the command line, the app log is visible in your OS console. - -Unless the log file path is customized in the app’s log file `log4j2.xml`, view the app log in the default location `MULE_HOME/logs/.log`. 
- -== Enable Cryptography Module Debug Logging - -To begin troubleshooting the Cryptography module, enable debug logging to see the exact error messages: - -. Access Anypoint Studio and navigate to the *Package Explorer* view. -. Open your application by clicking the project name. -. Open the `src/main/resources` path folder. -. Open the `log4j2.xml` file inside the folder. -. Add the following line: -+ -`` - -[start=6] -. Save your changes. -. Click the project name in *Package Explorer* and then click *Run* > *Run As* > *Mule Application*. - -== Understand PGP Encryption and Decryption Configuration - -During PGP encryption, the sender of the message must encrypt its content using the receiver’s _public key_. To encrypt messages in your Mule app using someone else’s public key, in the *Crypto Pgp* global configuration add the _receiver_ public keyring file in the *Public keyring* field. - -During PGP decryption, the receiver of the message must use its _private key_ to decrypt the contents of a message that was encrypted using a public key. To decrypt the message, in the *Crypto Pgp* global configuration add the _receiver_ private keyring file in the *Private keyring* field. - -.Crypto Pgp Global configuration with Public keyring and Private keyring fields -image::mruntime-crypto-pgp-global-config.png[Crypto Pgp Global configuration with Public keyring and Private keyring fields] - -== Understand PGP Signature Configuration - -In addition to encrypting, you can sign a message. The signature provides an integrity check of the original message. - -To create a signature, in the *Crypto Pgp* global configuration add the _sender_ private keyring file in the *Private keyring* field. - -To validate a signature, in the *Crypto Pgp* global configuration add the _sender_ public keyring file in the *Public keyring* field. - -=== PGP Signature Types and Operations - -The Cryptography module has two PGP signature operations: - -* *Pgp sign*: Creates a PGP armored signature in ASCII format. -* *Pgp sign binary*: Creates a PGP binary signature. - -In both cases, signing includes the private key of the sender, so the secret passphrase must be provided. - -There are two types of signatures: - -* Attached signature + -Generates a single document file that contains both the signature and the original document. - -* Detached signature + -Generates a single document file that contains only the signature, which is stored and transmitted separately from the document the signature signs. - -Currently, the Cryptography module supports validation of detached signatures only. - -== Understand Common Throws - -Here is a list of common throw messages and how to interpret them: - -* CRYPTO:PARAMETERS - - The operation is configured using invalid parameters. - -* CRYPTO:MISSING_KEY - - A key required for the operation was not found. - -* CRYPTO:PASSPHRASE - - The unlocking password is invalid. - -* CRYPTO:CHECKSUM - - An error occurred during an attempt to calculate a checksum. - -* CRYPTO:TRANSFORMATION - - An error occurred during an attempt to transform binary to ASCII to build the ASCII Armor file. - -* CRYPTO:VALIDATION - - The signature cannot be validated against the data. 
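One note on the debug-logging steps earlier on this page: the logger entry to add to `log4j2.xml` was lost in extraction. A typical entry looks like the sketch below; the package name is an assumption, so confirm the correct package for your Cryptography module version.

[source,xml,linenums]
----
<!-- Assumed logger package for the Cryptography module; verify against your module version -->
<AsyncLogger name="org.mule.extension.crypto" level="DEBUG"/>
----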
- -== See Also - -* https://help.mulesoft.com[MuleSoft Help Center] -* xref:cryptography-reference.adoc[Cryptography Module Reference] diff --git a/modules/ROOT/pages/cryptography-xml.adoc b/modules/ROOT/pages/cryptography-xml.adoc deleted file mode 100644 index cc9597dcd7..0000000000 --- a/modules/ROOT/pages/cryptography-xml.adoc +++ /dev/null @@ -1,417 +0,0 @@ -= XML Cryptography -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] -:keywords: cryptography, module, sign, encrypt, xml, AES -:toc: -:toc-title: - -== Configure XML Encryption from Anypoint Studio - -To configure XML encryption from Anypoint Studio, follow these steps: - -. From the Mule palette, add *Crypto* to your project. -+ -See xref:cryptography.adoc#install-crypto-module[Install the Extension] for instructions. -. Select the desired operation, and drag the component to the flow: -+ -image::mruntime-crypto-xml-add.png[crypto-xml-add] -. Open the component properties and select an existing *Module configuration*, or create a new one by specifying the *Keystore*, *Type* (JKS, JCEKS, PKCS12) and *Password*. -+ -You can also add symmetric or asymmetric key information to be used in the sign operations: -+ -image::mruntime-crypto-jce-global-config.png[crypto-jce-global-config] -. Configure *Key selection* by using a *Key id* value previously defined in the module configuration, or define a new one for this operation: -+ -image::mruntime-crypto-xml-config.png[crypto-xml-config] -. Select *Digest Algorithm*, *Canonicalization Algorithm*, *Type*, and *Element path*. - -== Encrypting - -This example configures a keystore that contains a symmetric key that will later be used for encryption. - -.Example: JCE Configuration -[source,xml,linenums] ----- - - - - - ----- - -In the next example, the XML encrypt operation is used to encrypt a specific element of the XML document. - -.Example: Using the Encrypt Operation -[source,xml,linenums] ----- - ----- - -The `elementPath` is an XPath expression that identifies the element to encrypt. -Depending on your needs, you can use a symmetric or asymmetric key for encrypting an XML document. - -== Decrypting - -.Example: JCE Configuration -[source,xml,linenums] ----- - - - - - - ----- - -In the next example, the XML decrypt operation (`crypto:xml-decrypt`) is used to decrypt an XML document. The operation uses the asymmetric key stored in the referenced keystore. - -.Example: Using the Decrypt Operation -[source,xml,linenums] ----- - ----- - -Depending on your needs, you can use a symmetric or asymmetric key for decryption. - -== Signing - -.Example: JCE Configuration -[source,xml,linenums] ----- - - - - - ----- - -The next example uses the asymmetric key to sign an XML document by creating an XML envelope and inserting the signature inside the content that is being signed. - -.Example: Enveloped Signature -[source,xml,linenums] ----- - ----- - -In the next example, a detached XML signature is created based on an element of the XML document. Instead of being inserted in the signed content, the detached signature is returned as a separate XML element. - -.Example: Detached Signature -[source,xml,linenums] ----- - ----- - -== Validating a Signature - -.Example: JCE Configuration -[source,xml,linenums] ----- - - - - - ----- - -In the next example, the asymmetric key is used to validate the signature of the XML element specified by the `elementPath` XPath expression. 
- -.Example: Using the Validate Operation -[source,xml,linenums] ----- - ----- - -If the document has multiple signatures, set `elementPath` to select the signature to validate. Specify a signed element using an XPath expression to validate the signature for that element. - -== Targeting a Custom Namespace by Using elementPath - -To sign or validate an XML element that is inside a custom namespace, specify the namespace by using the XPath functions: https://developer.mozilla.org/en-US/docs/Web/XPath/Functions/namespace-uri[namespace-uri] and https://developer.mozilla.org/en-US/docs/Web/XPath/Functions/local-name[local-name]. - -For example, consider the following document: -[source,xml,linenums] ----- - - - - - - - - ----- -To target the content of `FirstElement` inside `soap:Envelope`, you specify the `xmlns:soap` namespace. The XML schema for `xmlns:soap` is defined in `http://www.w3.org/2003/05/soap-envelope/`. - -The following example shows an `xml-sign` operation configured to sign `FirstElement`: -[source,xml,linenums] ----- - ----- -Note that the `elementPath` expression specifies the `xmlns:soap` namespace. - -== Reference - -=== Module Configuration - -JCE Configuration for Java keystores and inline keys to sign or encrypt XML documents or elements. - -==== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -|Name | String | The name for this configuration. Connectors reference the configuration with this name. | | *x*{nbsp} -| Keystore a| String | +++Path to the keystore file.+++ | | {nbsp} -| Type a| Enumeration, one of: - -** `JKS` -** `JCEKS` -** `PKCS12` | +++Type of the keystore.+++ | `JKS` | {nbsp} -| Password a| String | +++Password for unlocking the keystore.+++ | | {nbsp} -| Jce Key Infos a| Array of One of: - -* <> -* <> | +++List of keys to be considered, with internal IDs for referencing them.+++ | | {nbsp} -| Expiration Policy a| <> | +++Configures the minimum amount of time that a dynamic configuration instance can remain idle before the runtime considers it eligible for expiration. This does not mean that the platform will expire the instance at the exact moment that it becomes eligible. The runtime will actually purge the instances when it sees it fit.+++ | | {nbsp} -|=== - -[[xmlDecrypt]] -== Xml Decrypt Operation -`` - -+++ -Decrypts the XML document. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | +++the document to decrypt+++ | `#[payload]` | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} -* `CRYPTO:DECRYPTION` {nbsp} - - -[[xmlEncrypt]] -== Xml Encrypt Operation -`` - -+++ -Encrypt the XML document. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. | | *x*{nbsp} -| Content a| Binary | +++the document to encrypt+++ | `#[payload]` | {nbsp} -| Algorithm a| Enumeration, one of: - -** `AES_CBC` -** `AES_GCM` -** `TRIPLEDES` | +++the algorithm for encryption+++ | `AES_CBC` | {nbsp} -| Element Path a| String | +++the path to the element to encrypt, if empty the whole document is considered+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:ENCRYPTION` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} - - -[[xmlSign]] -== Xml Sign Operation -`` - -+++ -Sign an XML document. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | +++the XML document to sign+++ | `#[payload]` | {nbsp} -| Digest Algorithm a| Enumeration, one of: - -** `RIPEMD160` -** `SHA1` -** `SHA256` -** `SHA512` | +++the hashing algorithm for signing+++ | `SHA256` | {nbsp} -| Canonicalization Algorithm a| Enumeration, one of: - -** `EXCLUSIVE` -** `EXCLUSIVE_WITH_COMMENTS` -** `INCLUSIVE` -** `INCLUSE_WITH_COMMENTS` | +++the canonicalization method for whitespace and namespace unification+++ | `EXCLUSIVE` | {nbsp} -| Type a| Enumeration, one of: - -** `DETACHED` -** `ENVELOPED` -** `ENVELOPING` | +++the type of signature to create+++ | `ENVELOPED` | {nbsp} -| Element Path a| String | +++for internally detached signatures, an unambiguous XPath expression resolving to the element to sign+++ | | {nbsp} -| Streaming Strategy a| * <> -* <> -* non-repeatable-stream | +++Configure if repeatable streams should be used and their behavior+++ | | {nbsp} -| Key Id a| String | +++The key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -| Target Variable a| String | +++The name of a variable on which the operation's output will be placed+++ | | {nbsp} -| Target Value a| String | +++An expression that will be evaluated against the operation's output and the outcome of that expression will be stored in the target variable+++ | `#[payload]` | {nbsp} -|=== - -=== Output Type - -Binary - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:KEY` {nbsp} -* `CRYPTO:PASSPHRASE` {nbsp} -* `CRYPTO:SIGNATURE` {nbsp} - - -[[xmlValidate]] -== Xml Validate Operation -`` - -+++ -Validate an XML signed document. -+++ - -=== Parameters -[cols=".^20%,.^20%,.^35%,.^20%,^.^5%", options="header"] -|=== -| Name | Type | Description | Default Value | Required -| Configuration | String | The name of the configuration to use. 
| | *x*{nbsp} -| Content a| Binary | +++Specifies the document to verify (includes the signature).+++ | `#[payload]` | {nbsp} -| Element Path a| String | +++For internally detached signatures, an unambiguous XPath expression that resolves to the signed element.+++ | | {nbsp} -| Use inline certificate if present a| Boolean | +++Specify whether or not to validate the signature against a certificate contained in the +++`ds:Signature`+++ element, if the certificate is present.+++ | `"false"` | {nbsp} -| Key Id a| String | +++Specifies the key ID, as defined in the JCE configuration.+++ | | {nbsp} -| Jce Key Info a| One of: - -* <> -* <> | +++An inline key definition.+++ | | {nbsp} -|=== - -=== Throws -* `CRYPTO:MISSING_KEY` {nbsp} -* `CRYPTO:PARAMETERS` {nbsp} -* `CRYPTO:VALIDATION` {nbsp} - -== Types Definition -[[ExpirationPolicy]] -=== Expiration Policy - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max Idle Time a| Number | A scalar time value for the maximum amount of time a dynamic configuration instance should be allowed to be idle before it's considered eligible for expiration | | -| Time Unit a| Enumeration, one of: - -** `NANOSECONDS` -** `MICROSECONDS` -** `MILLISECONDS` -** `SECONDS` -** `MINUTES` -** `HOURS` -** `DAYS` | A time unit that qualifies the maxIdleTime attribute | | -|=== - -[[repeatable-in-memory-stream]] -=== Repeatable In Memory Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Initial Buffer Size a| Number | This is the amount of memory that will be allocated in order to consume the stream and provide random access to it. If the stream contains more data than can be fit into this buffer, then it will be expanded by according to the `bufferSizeIncrement` attribute, with an upper limit of `maxInMemorySize`. | | -| Buffer Size Increment a| Number | This is by how much will be buffer size by expanded if it exceeds its initial size. Setting a value of zero or lower will mean that the buffer should not expand, meaning that a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised when the buffer gets full. | | -| Max Buffer Size a| Number | This is the maximum amount of memory that will be used. If more than that is used then a `STREAM_MAXIMUM_SIZE_EXCEEDED` error will be raised. A value lower or equal to zero means no limit. | | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which all these attributes are expressed | | -|=== - -[[repeatable-file-store-stream]] -=== Repeatable File Store Stream - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Max In Memory Size a| Number | Defines the maximum memory that the stream should use to keep data in memory. If more than that is consumed then it will start to buffer the content on disk. | | -| Buffer Unit a| Enumeration, one of: - -** `BYTE` -** `KB` -** `MB` -** `GB` | The unit in which maxInMemorySize is expressed | | -|=== - -[[JceAsymmetricKeyInfo]] -=== Jce Asymmetric Key Info - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key Id a| String | Internal key ID for referencing from operations. | | x -| Alias a| String | Alias of the key in the keystore. | | x -| Password a| String | Password used to unlock the private part of the key. 
| | -|=== - -[[JceSymmetricKeyInfo]] -=== Jce Symmetric Key Info - -[cols=".^20%,.^25%,.^30%,.^15%,.^10%", options="header"] -|=== -| Field | Type | Description | Default Value | Required -| Key Id a| String | Internal key ID for referencing from operations. | | x -| Alias a| String | Alias of the key in the keystore. | | x -| Password a| String | Password used to unlock the key. | | x -|=== diff --git a/modules/ROOT/pages/cryptography.adoc b/modules/ROOT/pages/cryptography.adoc deleted file mode 100644 index 3136d1577f..0000000000 --- a/modules/ROOT/pages/cryptography.adoc +++ /dev/null @@ -1,33 +0,0 @@ -= About the Cryptography Module -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] -:keywords: cryptography, module, sign, encrypt, pgp, jce, AES - -This module provides cryptography capabilities to a Mule application. Its main features include: - -* Symmetric encryption and decryption of messages. -* Asymmetric encryption and decryption of messages. -* Message signing and signature validation of signed messages. - -This module supports three different strategies to encrypt and sign your messages: - -* xref:cryptography-pgp.adoc[PGP]: Signature/encryption using PGP. -* xref:cryptography-xml.adoc[XML]: For signing or encrypting XML documents or elements. -* xref:cryptography-jce.adoc[JCE]: For using a wider range of cryptography capabilities as provided by the Java Cryptography Extension. - -Additionally, this module offers two general operations to calculate and validate stream checksums. + -See xref:cryptography-reference.adoc[General Operations] for more information. - -== Using the Extension in Anypoint Studio 7 - -You can use this extension by adding it as a dependency in your Mule app. - -[[install-crypto-module]] -=== Installing the Extension - -. Open your Mule project in Anypoint Studio. -. Go to the Mule Palette. -. Select **Search in Exchange**, and search for the Cryptography Module. -. Add the extension. -. You can now search in the mule Palette for operations of the Cryptography module. diff --git a/modules/ROOT/pages/custom-configuration-properties-provider.adoc b/modules/ROOT/pages/custom-configuration-properties-provider.adoc index a55fb597b6..c55ffe9247 100644 --- a/modules/ROOT/pages/custom-configuration-properties-provider.adoc +++ b/modules/ROOT/pages/custom-configuration-properties-provider.adoc @@ -174,7 +174,7 @@ You can download or checkout the https://github.com/mulesoft/mule-custom-properties-providers-module-example[sample project], which contains all the infrastructure code to get started implementing your custom configuration properties resolver extension. -The sample project is a Mule SDK module. See https://docs.mulesoft.com/mule-sdk/1.1/getting-started[Getting started with the Mule SDK] for additional information. +The sample project is a Mule SDK module. See xref:mule-sdk::getting-started.adoc[Getting started with the Mule SDK] for additional information. === Customizing the Module to Access Your Custom Properties Source @@ -184,20 +184,17 @@ Follow these steps to customize the Mule SDK Module: . Open the `pom.xml` file: .. Define the GAV (`groupId`, `artifactId`, and `version`) of your module. .. Define the `name` of your module. +.. Review the minimum Mule version you want your properties provider to require. This example is compatible with Mule 4.1.1 and later to cover all possible scenarios. See xref:mule-sdk::choosing-version.adoc[Choosing the SDK Version]. . 
Change the package name (`com.my.company.custom.provider.api`) of your code. -. Open `resources/META-INF/mule-artifact/mule-artifact.json`: -.. Set the `type` field with value `com.my.company.custom.provider.api.CustomConfigurationPropertiesExtensionLoadingDelegate`, replacing `com.my.company.custom.provider.api` to match the package name you changed previously. -.. Set the `name` field using the name you want to define for the module. -.. Set the `exportedPackages` field to match the package name you changed previously. . Open `resources/META-INF/services/org.mule.runtime.config.api.dsl.model.properties.ConfigurationPropertiesProviderFactory`, and change the content to match the package name you changed previously. -. Open the `CustomConfigurationPropertiesExtensionLoadingDelegate` class: +. Open the `CustomConfigurationPropertiesExtension` class: .. Change the `EXTENSION_NAME` constant to the name of your module. -.. Change the `fromVendor` method parameter to your company name. +.. Change the `vendor` method parameter on the `@Extension` annotation to your company name. .. Customize the section at the end to define the parameters that can be configured in the `config` element of your module. . Open the `CustomConfigurationPropertiesProviderFactory` class: .. Change the `CUSTOM_PROPERTIES_PREFIX` value to a meaningful prefix for the configuration properties that your module must resolve. .. Change the class implementation to look up the properties from your custom source. -. Update `CustomPropertiesProviderOperationsTestCase` with more test cases to cover your new module functionality. +. Update the MUnit test cases in `test/munit` with more test cases to cover your new module functionality. Once your module is ready, you can install it locally using `mvn clean install` to make the module accessible from Studio. @@ -220,4 +217,7 @@ You can now configure your new component and start using properties with the pre == Using Custom Configuration Properties Provider versus a Connector -For static properties, use the properties provider approach, because static properties do not change during runtime. If your properties might change during runtime, create a connector that can provide the value as one of its operations. +For static properties, use the properties provider approach, because static properties don't change during runtime. If your properties might change during runtime, create a connector that can provide the value as one of its operations. + +[NOTE] +Although the xref:mule-sdk::getting-started.adoc[Mule SDK for Java] is used to implement a custom configuration properties provider, you can't use xref:mule-sdk::connections.adoc[ConnectionProviders] to connect to external sources. This is because `ConnectionProviders` are managed by Mule runtime and are initialized at a later stage, after the static properties are resolved. Consequently, connectivity testing isn't supported for custom properties providers. \ No newline at end of file diff --git a/modules/ROOT/pages/deploy-on-premises.adoc b/modules/ROOT/pages/deploy-on-premises.adoc index 7eff59c299..be40961448 100644 --- a/modules/ROOT/pages/deploy-on-premises.adoc +++ b/modules/ROOT/pages/deploy-on-premises.adoc @@ -334,7 +334,8 @@ Updating a Mule application at runtime can be a complex change involving class m There are two ways you can update an application: -* By adding the modifications over an existing unpacked app folder and touching the main configuration file (`mule-config.xml` located in the app's root directory by default). 
+* By adding the modifications over an existing unpacked app folder and touching the main configuration file (`mule-config.xml` located in the app's root directory by default). + +For this option to be valid, start the runtime with the system property `-M-Dmule.deployment.forceParseConfigXmls=true`. * By adding a new `jar` with an updated version of the app into the `$MULE_HOME/apps` directory. Mule detects the `jar` as an updated version of an existing application and ensures the update by a clean redeployment of the app. + Note that Mule discards any modifications to the old application folder. The new app folder is a clean unpacked application from a `jar`. diff --git a/modules/ROOT/pages/deploy-to-cloudhub.adoc b/modules/ROOT/pages/deploy-to-cloudhub.adoc index ea7dbe8ecd..907b5b4a46 100644 --- a/modules/ROOT/pages/deploy-to-cloudhub.adoc +++ b/modules/ROOT/pages/deploy-to-cloudhub.adoc @@ -98,7 +98,10 @@ The following table shows the available parameters to configure the CloudHub dep If not set, by default this value is set to +https://anypoint.mulesoft.com+. | No | `muleVersion` | The Mule runtime engine version to run in your CloudHub instance. + Ensure that this value is equal to or higher than the earliest required Mule version of your application. + -Example values: `4.3.0`, `4.2.2-hf4` | Yes +Example value: `4.3.0` + + +Starting with Mule 4.5, deployments to CloudHub require Major.Minor version, which deploys the latest version of Mule runtime. +Example value: `4.5`| Yes | `username` | Your CloudHub username | Only when using Anypoint Platform credentials to login. | `password` | Your CloudHub password | Only when using Anypoint Platform credentials to login. | `applicationName` | The name of your application in CloudHub + @@ -156,7 +159,8 @@ include::mule-runtime::partial$mmp-concept.adoc[tag=connectedAppsParameterDescri | `applyLatestRuntimePatch` | When set to `true`, the plugin instructs CloudHub to update the worker to the latest available patch for the Mule runtime engine version specified in the deployment configuration, and then deploys the application. + By default, it is set to `false`. | No | `disableCloudHubLogs` | When set to `true`, the plugin instructs CloudHub to disable CloudHub logging and instead use the application configured in the `log4j2.xml` file. + -By default, it is set to `false`. | No +By default, it is set to `false`. + +This parameter is available in plugin version 3.8.1 and later. | No |=== == Encrypt Credentials diff --git a/modules/ROOT/pages/deploying.adoc b/modules/ROOT/pages/deploying.adoc index 5e0ae1cf61..0cea02ddfd 100644 --- a/modules/ROOT/pages/deploying.adoc +++ b/modules/ROOT/pages/deploying.adoc @@ -18,16 +18,19 @@ In addition, different tools are available to deploy applications to each of the |=== |Deployment Target | Available Deployment Tools | Mule Runtime Engine Installation |CloudHub | -* Anypoint Studio -* Anypoint Runtime Manager +* Anypoint Code Builder * Anypoint Platform CLI +* Anypoint Runtime Manager +* Anypoint Studio * Mule Maven plugin | * No installation of Mule runtime engine is required, because CloudHub workers start Mule instances as part of the deployment process. |CloudHub 2.0 | -* Anypoint Runtime Manager +* Anypoint Code Builder * Anypoint Platform CLI +* Anypoint Runtime Manager +* Anypoint Studio * Mule Maven plugin | * No installation of Mule runtime engine is required, because CloudHub 2.0 replicas start Mule instances as part of the deployment process. 
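As a sketch of the first on-premises update option described in the `deploy-on-premises.adoc` change above (modifying the unpacked app and touching its main configuration file while the runtime runs with the forced-parse property), assuming a placeholder application named `my-app`:

[source,bash,linenums]
----
# Start Mule with forced re-parsing of configuration XMLs (required for in-place updates)
$MULE_HOME/bin/mule start -M-Dmule.deployment.forceParseConfigXmls=true

# Copy the modified resources over the unpacked app ("my-app" is a placeholder name),
# then touch the main configuration file to trigger redeployment of the changes
cp -r target/classes/* $MULE_HOME/apps/my-app/
touch $MULE_HOME/apps/my-app/mule-config.xml
----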
@@ -41,10 +44,10 @@ In addition, different tools are available to deploy applications to each of the * Installation of Anypoint Runtime Fabric in your desired infrastructure is required |On-premises | -* Anypoint Studio +* Anypoint Platform CLI * Anypoint Runtime Manager +* Anypoint Studio * Runtime Manager in Anypoint Platform Private Cloud Edition -* Anypoint Platform CLI * Mule Maven plugin | * Installation of Mule runtime engine in your desired infrastructure is required. @@ -52,6 +55,9 @@ In addition, different tools are available to deploy applications to each of the |=== +[NOTE] +After End of Extended Support for Mule runtime 4.4, Mule applications deployed to CloudHub or CloudHub 2.0 environments are stopped. + == See Also * xref:runtime-manager::cloudhub.adoc[CloudHub] diff --git a/modules/ROOT/pages/distributed-locking.adoc b/modules/ROOT/pages/distributed-locking.adoc index 0e44751d15..620eed32e2 100644 --- a/modules/ROOT/pages/distributed-locking.adoc +++ b/modules/ROOT/pages/distributed-locking.adoc @@ -4,7 +4,7 @@ include::_attributes.adoc[] endif::[] :keywords: distributed locking, cluster -Mule runtime engine provides the ability to create locks for synchronizing access to resources within Mule components. To manage concurrent access to resources, Mule provides a lock factory that you can access programmatically by scripts or in custom extensions built with the xref:1.1@mule-sdk::getting-started.adoc[Java SDK]. +Mule runtime engine provides the ability to create locks for synchronizing access to resources within Mule components. To manage concurrent access to resources, Mule provides a lock factory that you can access programmatically by scripts or in custom extensions built with the xref:mule-sdk::getting-started.adoc[Java SDK]. Any locks you create with the Mule lock factory work seamlessly on deployment models that use either a single server or a cluster of servers. This means that if you have a server running the same flow in multiple threads, or a cluster environment running the same app, you can guarantee resource synchronization with Mule locks. Additionally, the Mule locking system offers a simple API to access shared locks. diff --git a/modules/ROOT/pages/execution-engine.adoc b/modules/ROOT/pages/execution-engine.adoc index df13d91ccd..f3f10a87e3 100644 --- a/modules/ROOT/pages/execution-engine.adoc +++ b/modules/ROOT/pages/execution-engine.adoc @@ -27,7 +27,7 @@ See specific component or module documentation to learn the processing type it s For connectors created with the Mule SDK, the SDK determines the most appropriate processing type based on how the connector is implemented. For -details on that mechanism, refer to the xref:1.1@mule-sdk::index.adoc[Mule SDK documentation]. +details on that mechanism, refer to the xref:mule-sdk::index.adoc[Mule SDK documentation]. [[threading]] == Threading diff --git a/modules/ROOT/pages/feature-flagging.adoc b/modules/ROOT/pages/feature-flagging.adoc index fadd231315..42825ebb4b 100644 --- a/modules/ROOT/pages/feature-flagging.adoc +++ b/modules/ROOT/pages/feature-flagging.adoc @@ -119,7 +119,7 @@ The following table shows the available feature flags, a description of their fu *Enabled by Default Since* -* Not enabled by default in any Mule version. 
+* 4.4.0 *Issue ID* @@ -356,6 +356,22 @@ Suppressed errors are treated as underlying causes that can also be matched by O *Issue ID* * W-11855052 + +<.^|`mule.forkJoin.completeChildContextsOnTimeout` +|When enabled, the processors that perform fork and join work (currently, Scatter-Gather and Parallel For Each routers) complete the child event contexts when a timeout occurs. + +*Available Since* + +* 4.4.0-20250217 + +*Enabled by Default Since* + +* Not enabled by default in any Mule version + +*Issue ID* + +* W-16941297 + |=== == See Also diff --git a/modules/ROOT/pages/fips-140-2-compliance-support.adoc b/modules/ROOT/pages/fips-140-2-compliance-support.adoc index c81bb59363..cfb6398b46 100644 --- a/modules/ROOT/pages/fips-140-2-compliance-support.adoc +++ b/modules/ROOT/pages/fips-140-2-compliance-support.adoc @@ -4,14 +4,14 @@ include::_attributes.adoc[] endif::[] :keywords: fips, certifications, security -The Mule 4 Runtime can be configured to run in a FIPS 140-2 certified environment. This includes all Runtime connectors, such as HTTP connector. Note that Mule does not run in FIPS security mode by default. There are two requirements: +The Mule 4 Runtime can be configured to run in a FIPS 140-2 certified environment. This includes all Runtime connectors, such as HTTP connector. Note that Mule doesn't run in FIPS security mode by default. There are two requirements: * Have a certified cryptography module installed in your Java environment * Adjust Mule Runtime settings to run in FIPS security mode [NOTE] -- -By default, Government Cloud is configured for FIPS 140-2, so you do not need to perform the following steps if you are using Government Cloud. +By default, Government Cloud is configured for FIPS 140-2, so you don't need to perform the following steps if you are using Government Cloud. If you are using Runtime Fabric, see xref:runtime-fabric::enable-fips-140-2-compliance.adoc[Enabling FIPS 140-2 Compliance Mode for Runtime Fabric] instead of performing these steps. -- @@ -23,7 +23,7 @@ This document assumes that you are familiar with http://csrc.nist.gov/publicatio [[set_up_environment]] == Setting Up a FIPS 140-2 Java Environment -Mule relies on the Java runtime to provide a FIPS-compliant security module, which is why the first requirement is to have a FIPS 140-2 Java environment properly set up. If you are setting up your system for FIPS compliance for the first time and you have not already configured a certified security provider, you must first https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules[select and obtain one], then set up your Java environment following the instructions specific to your selected provider. +Mule relies on the Java runtime to provide a FIPS-compliant security module, which is why the first requirement is to have a FIPS 140-2 Java environment properly set up. If you are setting up your system for FIPS compliance for the first time and you haven't already configured a certified security provider, you must first https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules[select and obtain one], then set up your Java environment following the instructions specific to your selected provider. Details for this process vary according to your selected security provider. Please refer to the documentation for your security provider for complete instructions. 
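As a rough illustration only, registering a FIPS-certified provider such as Bouncy Castle FIPS in the JDK's `java.security` file might look like the sketch below. The class names, arguments, and ordering are assumptions that depend entirely on the provider and JDK you use; follow your provider's own instructions.

[source,text,linenums]
----
# Illustrative java.security entries, assuming Bouncy Castle FIPS; adjust for your provider and JDK
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
# ...remaining default providers shifted down accordingly
----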
@@ -89,28 +89,68 @@ To configure the cipher suite used by on-prem Mule installations, see < + + + +---- + +[NOTE] +If the source keystore is `PKCS12`, set the parameter `-srcstoretype` to `PKCS12` in the `keytool` command. + == Fine-Tuning SSL Connectors The Mule conf folder includes two files that allow you to fine-tune the configuration of SSL connectors by manually setting which cipher suites Mule can use and which SSL protocols are allowed: -* `tls-default.conf` (Allows fine-tuning when Mule is not configured to run in FIPS security mode) +* `tls-default.conf` (Allows fine-tuning when Mule isn't configured to run in FIPS security mode) * `tls-fips140-2.conf` (Allows fine-tuning when Mule is running in FIPS security mode) Open the relevant file and comment or uncomment items in the lists to manually configure the allowed cipher suites and SSL protocols. If you make no changes to these files, Mule allows the configured security manager to select cipher suites and protocols. == Tips and Limitations -* The Bouncy Castle security provider bundled with the Mule Runtime distribution is not FIPS certified. When Mule starts in FIPS security mode, the Bouncy Castle provider is not registered or used. -* Not all encryption schemes and signatures included in xref:cryptography.adoc[Mule Cryptography Module] and xref:secure-configuration-properties.adoc[Mule Secure Properties] configuration options are FIPS compliant. If your application is using an algorithm that is not approved for FIPS use, you will get an error at runtime that reads: +* The Bouncy Castle security provider bundled with the Mule Runtime distribution isn't FIPS certified. When Mule starts in FIPS security mode, the Bouncy Castle provider isn't registered or used. +* Not all encryption schemes and signatures included in xref:securing.adoc#cryptography-module[Mule Cryptography Module] and xref:secure-configuration-properties.adoc[Mule Secure Properties] configuration options are FIPS compliant. If your application is using an algorithm that isn't approved for FIPS use, you will get an error at runtime that reads: .... Could not find encryption algorithm ''. You are running in FIPS mode, so please verify that the algorithm is compliant with FIPS. .... * Keep in mind that your different environments might have different security configurations, including different encryption schemes and algorithm selections. So you might see this error in certain environments (but not others), depending on how they are set up. +* Similarly, enabling FIPS at the OS level, such as on Red Hat, isn't supported as it causes cipher suite errors during license validation. 
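+As a practical note on the keystore conversion mentioned earlier on this page, a `PKCS12` source keystore can typically be migrated with `keytool`. The following is only a sketch: the destination store type, file names, and provider arguments are illustrative and depend on the FIPS provider you selected:
+
+[source,bash]
+----
+keytool -importkeystore \
+  -srckeystore keystore.p12 -srcstoretype PKCS12 \
+  -destkeystore fips-keystore.bcfks -deststoretype BCFKS \
+  -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \
+  -providerpath bc-fips.jar
+----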
== See Also -https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules[Validated FIPS-2 Cryptographic Modules] - -http://csrc.nist.gov/publications/fips/fips140-2/fips1402annexa.pdf[Approved Cryptographic Algorithms] +* https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules[Validated FIPS-2 Cryptographic Modules] +* http://csrc.nist.gov/publications/fips/fips140-2/fips1402annexa.pdf[Approved Cryptographic Algorithms] diff --git a/modules/ROOT/pages/flow-component.adoc b/modules/ROOT/pages/flow-component.adoc index 57c02d9093..719fb7d1ee 100644 --- a/modules/ROOT/pages/flow-component.adoc +++ b/modules/ROOT/pages/flow-component.adoc @@ -8,13 +8,13 @@ endif::[] toc::[] -//Anypoint Studio, Design Center connector + [[short_description]] Flow and Subflow scopes are components for grouping together a sequence of other Core components and operations (provided by connectors and modules) to help automate integration processes. The Flow component is fundamental to a Mule app. Because all Mule apps must contain at least one flow, Anypoint -Studio and Flow Designer automatically provide the first Flow component in +Studio automatically provides the first Flow component in your Mule app. A Mule app can contain additional flows and subflows, as this example shows: @@ -51,7 +51,7 @@ Set `maxConcurrency` to `1` to cause the flow to process requests one at a time. See xref:execution-engine.adoc#backpressure[Back-pressure] for details about Mule's behavior after the the maximum concurrency value is reached. | Business Events a| Optional: Defaults to `false`. For Mule apps that you deploy to CloudHub, you can enable business events (XML example: `tracking:enable-default-event="true"`) and add a Transaction ID (XML example: `). See xref::business-events.adoc[Business Events]. -| Metadata | As with many other components, you can set metadata for this component. For more on this topic, see the Studio document xref:7.1@studio::metadata-editor-concept.adoc[Metadata Editor]. +| Metadata | As with many other components, you can set metadata for this component. For more on this topic, see the Studio document xref:studio::metadata-editor-concept.adoc[Metadata Editor]. |=== == Subflow Configuration @@ -62,7 +62,7 @@ Subflow scopes provide a way to edit the name of the subflow and to add metadata |=== | Field | Description | Name (`name`) | Name for the subflow. Subflows automatically receive an editable name that matches (or partially matches) the project name. -| Metadata | As with many other components, you can set up metadata for this component. For more on this topic, see the Studio document xref:7.1@studio::metadata-editor-concept.adoc[Metadata Editor]. +| Metadata | As with many other components, you can set up metadata for this component. For more on this topic, see the Studio document xref:studio::metadata-editor-concept.adoc[Metadata Editor]. |=== == XML for Flows and Subflows diff --git a/modules/ROOT/pages/for-each-scope-concept.adoc b/modules/ROOT/pages/for-each-scope-concept.adoc index 1236ac7a9f..1ed2481139 100644 --- a/modules/ROOT/pages/for-each-scope-concept.adoc +++ b/modules/ROOT/pages/for-each-scope-concept.adoc @@ -31,7 +31,7 @@ Note that if the input contains information outside the collection you tell it t You can also split an array into batches to enable quicker processing. Each batch is treated as a separate Mule message. 
For example, if a collection has 200 elements and you set *Batch Size* to `50`, the For Each scope iteratively processes 4 batches of 50 elements, each as a separate Mule message. -=== Example XML +== Example XML This is an example XML based on the For Each scope configuration detailed above: [source,xml,linenums] @@ -102,11 +102,11 @@ To download and open an example project while you are in Anypoint Studio, click For Each scopes open and close with a `` tag. Components that are affected by this scope are defined as child elements of the `` tag. -=== Configurable Properties +=== Configurable Variables [%header,cols="35,20,45"] |=== -|Property | Default | Description +|Variable | Default | Description | `collection` | `payload` | An expression that returns a Java collection, object array, map, or DOM @@ -114,7 +114,7 @@ For Each scopes open and close with a `` tag. Components that are affec | `counterVariableName` | `counter` -| Name of the property that stores the number of messages over which it iterates. +| Name of the variable that stores the number of messages over which it iterates. | `batchSize` | `1` @@ -123,7 +123,7 @@ For Each scopes open and close with a `` tag. Components that are affec | `rootMessageVariableName` | `rootMessage` -| Name of the property that stores the parent message. The parent is the complete, non-split message. +| Name of the variable that stores the parent message. The parent is the complete, non-split message. |=== diff --git a/modules/ROOT/pages/hadr-guide.adoc b/modules/ROOT/pages/hadr-guide.adoc index 9cc6300149..f9d2059751 100644 --- a/modules/ROOT/pages/hadr-guide.adoc +++ b/modules/ROOT/pages/hadr-guide.adoc @@ -122,7 +122,7 @@ There are two or more Mule environments, however they are part of the same clust |None - There is no service downtime. |=== -== High-Availability Deployment Models +== High-Availability for On-Premises Deployment Models * <> * <> diff --git a/modules/ROOT/pages/hardware-and-software-requirements.adoc b/modules/ROOT/pages/hardware-and-software-requirements.adoc index 4c0e3839fa..db8e693a5a 100644 --- a/modules/ROOT/pages/hardware-and-software-requirements.adoc +++ b/modules/ROOT/pages/hardware-and-software-requirements.adoc @@ -14,7 +14,7 @@ If you plan to install Mule and run it on premises, review these minimum hardwar Adjust RAM to match your latency requirements and the size and number of simultaneous messages that applications process. -Mule supports the x86 and x64 architectures but does not yet support the ARM architecture. +Mule supports the x86 and x64 architectures. == Required Software @@ -26,6 +26,8 @@ Verify that you use a supported version of Java before you install Mule. | JDK | JDK 1.8.0 or JDK 11 |=== +[NOTE] +Though you can run a different JDK of choice, MuleSoft doesn't support or take action to fix issues if they are traced back to the JDK. == Supported Software @@ -39,7 +41,7 @@ The Mule runtime engine passed functional testing against the following software |=== |Software |Version | OS | MacOS 10.15, HP-UX 11i V3, AIX 7.2, Windows Server 2019, Windows 10, Solaris 11.3, Red Hat Enterprise Linux 8.8, Ubuntu Server 20.04 -| JDK | JDK 1.8.0, JDK 11 +| JDK | Adoptium OpenJDK distribution |=== This version of Mule runtime engine is bundled with the Runtime Manager agent plugin version 2.4.21. For Runtime Manager Agent compatibility, see xref:release-notes::runtime-manager-agent/runtime-manager-agent-release-notes.adoc[Runtime Manager Agent Release Notes]. 
diff --git a/modules/ROOT/pages/http-connection-handling.adoc b/modules/ROOT/pages/http-connection-handling.adoc
new file mode 100644
index 0000000000..7656d5b1cf
--- /dev/null
+++ b/modules/ROOT/pages/http-connection-handling.adoc
@@ -0,0 +1,77 @@
+= Understanding HTTP Connection Handling during Mule Runtime Shutdown
+
+Mule runtime engine (Mule) manages HTTP connections during shutdown through a two-stage graceful shutdown process followed by a forceful termination.
+
+This process first refuses new connections while allowing existing ones to complete with a `Connection: close` signal, then rejects new requests entirely while finishing in-flight processing, and finally, on timeout, forcefully terminates all remaining connections.
+
+This behavior supports seamless scale-down scenarios, enabling you to reduce the number of Mule replicas while progressively draining traffic to maintain the stability and reliability of the application.
+
+== HTTP Connection Handling Diagram
+
+This diagram illustrates the behavior of HTTP connection handling during Mule shutdown:
+
+image::runtime-http-connections-diagram.png[]
+
+Phase 1: Mule Started
+
+* Mule State: Mule has started and is running normally.
+* HTTP Behavior: All incoming HTTP traffic is routed to Mule and processed without any special handling.
+
+Transition: Send Stop Signal to Mule
+
+* The shutdown sequence starts by sending a stop signal to the Mule instance.
+
+Phase 2: HTTP Graceful Shutdown - Mule Stopping
+
+* Trigger: Reception of the stop signal.
+* Duration: Up to the configured HTTP Graceful Shutdown Timeout (default: 5 seconds). However, if all existing HTTP connections are closed before this timeout period elapses, the HTTP Graceful Shutdown phase ends early, and Mule doesn't wait for the full timeout duration.
+* Mule State: Mule is initiating a shutdown or transitioning to stopped.
+* HTTP Behavior:
+
+** No new connections: The HTTP server stops accepting new incoming TCP connections (the acceptor socket is closed).
+** Existing and in-flight requests: Ongoing HTTP requests (both those already being processed in-flight and those on existing, established connections) continue to be handled normally. However, all responses include the `Connection: close` header, signaling to the HTTP client to close the connection after the current request-response cycle.
+** Early termination: If there are no active HTTP connections when the stop signal is received, the HTTP Graceful Shutdown phase ends immediately, even before the timeout period expires.
+** Late-arriving requests: If an HTTP request arrives just before the HTTP Graceful Shutdown Timeout is reached, it is still processed normally, even if the processing extends beyond the HTTP Graceful Shutdown Timeout.
+
+Transition: HTTP Graceful Shutdown Timeout Elapsed
+
+* The configured duration for the HTTP Graceful Shutdown has ended.
+
+Phase 3: Mule Graceful Shutdown - Mule Stopping
+
+* Trigger: Expiration of the HTTP Graceful Shutdown Timeout.
+* Duration: Up to the configured Mule Graceful Shutdown Timeout (default: 5 seconds).
+* Mule State: Mule is shutting down gracefully.
+* HTTP Behavior:
+
+** Reject new requests (503): Any new HTTP requests arriving at the runtime are immediately rejected with a 503 Service Unavailable error.
+** Reject new requests (404): Once the Mule app code unregisters the HTTP endpoint from the underlying HTTP server, subsequent new HTTP requests receive a 404 Not Found error. 
+** Complete in-flight messages: HTTP requests that were already fully read and are being processed (in-flight messages) are allowed to complete their execution normally.
+
+Transition: Mule Graceful Shutdown Timeout Elapsed
+
+* The configured duration for the overall Mule Graceful Shutdown has ended.
+
+Phase 4: Mule Stopped
+
+* Trigger: Expiration of the Mule Graceful Shutdown Timeout.
+* Mule State: Mule stopped.
+* Behavior: The Mule process proceeds to a forceful shutdown and eventually exits. As a result:
+
+** All remaining open HTTP connections are abruptly closed.
+** Clients connected to these forcibly closed connections are likely to receive a connection reset error or a broken pipe error, depending on the TCP status of their socket at the time of closure.
+
+== Configure Shutdown Timeout
+
+Note that while the HTTP Graceful Shutdown Timeout and the overall Mule Graceful Shutdown Timeout serve distinct purposes in the shutdown sequence, you configure them using the same `shutdownTimeout` parameter within the `<configuration>` element of your Mule application's XML file.
+
+Specify the value in milliseconds (default: 5000 milliseconds), which controls the maximum duration for both graceful shutdown periods. For example:
+
+[source,xml,linenums]
+----
+<configuration
+  shutdownTimeout="10000" />
+----
+
+For details, refer to xref:global-settings-configuration.adoc#global-configurations-reference[Global Configurations Reference].
+
diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc
index 62027504c0..1dd3d271c6 100644
--- a/modules/ROOT/pages/index.adoc
+++ b/modules/ROOT/pages/index.adoc
@@ -1,4 +1,4 @@
-= Mule Overview
+= Mule Runtime Engine Overview
 ifndef::env-site,env-github[]
 include::_attributes.adoc[]
 endif::[]
@@ -16,7 +16,7 @@ connectivity instead of point-to-point integrations.
 Mule applications provide functionality for message routing, data mapping, orchestration, reliability, security, and scalability.
 
-Anypoint Studio and Flow Designer support Mule application development.
+Anypoint Studio supports Mule application development.
 
 == Mule Domains
 
@@ -61,31 +61,7 @@ Mule is packaged within the xref:studio::index.adoc[Studio] IDE and with
 xref:design-center::index.adoc[Design Center] on Anypoint Platform so that you
 can run a Mule app as you design it.
 
-For production and pre-production deployments of Mule apps, you can use
-xref:runtime-manager::index.adoc[Runtime Manager] to deploy Mule apps
-to runtimes within CloudHub and other supported platform as a service (PaaS)
-solutions.
-
-* xref:runtime-manager::cloudhub.adoc[CloudHub] is a fully managed, cloud-based
-integration platform as a service (iPaaS) for Anypoint Platform that enables you
-to run your Mule apps without requiring you to provide Mule runtime engines or the
-infrastructure on which your apps run. You use Runtime Manager to deploy Mule
-apps to CloudHub, select the Mule version, set the number of vCores
-needed to run the app, and so on.
-* Hybrid deployment models manage Mule apps and runtimes from the Cloud while
-running them in a datacenter that is managed by your company:
-** For remote Mule runtimes (also called standalone or "naked" Mules), you
-start Mule runtimes from your datacenter, but you can deploy and manage
-Mule apps from the Cloud, through Runtime Manager. In this deployment model, you
-provide the infrastructure and Mule runtime (see
-xref:mule-standalone.adoc[Run Mule Runtime Engine On-Premises]).
-** For a hybrid PaaS deployment, you set up and run the PaaS on your company's -datacenter and use Runtime Manager to manage Mule apps within the PaaS. In this -case, you provision the infrastructure in which the apps run. To guarantee the -high availability of Mule apps, you use Runtime Manager to handle Mule runtimes. -MuleSoft also provides the built-in PaaS -solution, xref:runtime-fabric::index.adoc[Runtime Fabric], which runs Mule -runtime engines in a "containerized" environment. +include::cloudhub-2::partial$index-runtime-plane-hosting-options.adoc[tag=runtimePlaneHostingOptions] In addition to using Runtime Manager, you can perform deployments and manage Mule apps with diff --git a/modules/ROOT/pages/installing-an-enterprise-license.adoc b/modules/ROOT/pages/installing-an-enterprise-license.adoc index 3b5e7d9e21..78fdc5a53e 100644 --- a/modules/ROOT/pages/installing-an-enterprise-license.adoc +++ b/modules/ROOT/pages/installing-an-enterprise-license.adoc @@ -14,13 +14,14 @@ Complete the following steps to acquire and install a non-trial *Enterprise lice .. Delete the existing `muleLicenseKey.lic` file. . If you are installing your license on multiple platforms, back up your new `license.lic` file in another location before proceeding. . Make sure that the Mule Server is *stopped (not running)* and then open the terminal or command line on your system. -. On *Mac/Unix/Linux*, from the `$MULE_HOME/bin` directory, run the following command: +. Set the `JAVA_HOME` environment variable before running the `mule -installLicense` command. + -`mule -installLicense ~/license.lic` + - + -On *Windows*, first copy the `license.lic` file into the `\bin` folder, then execute the following in the command line: - + -`mule.bat -installLicense license.lic` + +.. On *Mac/Unix/Linux*, if `JAVA_HOME` isn't detected automatically, set it to avoid any issues. +.. On *Windows*, if `JAVA_HOME` isn't set, the license installation fails. +. Run the license installation command: ++ +.. On *Mac/Unix/Linux*, from the `$MULE_HOME/bin` directory, run `mule -installLicense ~/license.lic`. +.. On *Windows*, copy the `license.lic` file to the `\bin` folder, and then run `mule.bat -installLicense license.lic` from the command line. . In the `$MULE_HOME/conf` directory, Mule saves a new file called `muleLicenseKey.lic`. This shows that the license has been installed. . *Start* your Mule Server again, by the usual means. @@ -28,16 +29,29 @@ On *Windows*, first copy the `license.lic` file into the `\bin` folder, then exe Make sure that the Mule Server is *stopped* and then open the terminal or command line on your system. 
-To verify that Mule successfully installed your Enterprise license, run the following command: - +* To verify that Mule successfully installed your Enterprise license, run the following command: ++ `mule -verifyLicense` ++ +The command outputs: ++ +[source,xml,linenums] +---- +License information Evaluation = false, Expiration Date = Tue May 19 00:00:00 UTC 2025, Contact Name = John Doe, Contact Email Address = john.doe@company.com, Contact Telephone = 00000000, Contact Company = Local Global Ltd, Contact Country = US, Entitlements = clustering,api-gateway +---- -To uninstall a previously installed license, run the following command: - +* To uninstall a previously installed license, run the following command: ++ `mule -unInstallLicense` - ++ Sometimes the license installation fails and it might be necessary to manually delete `$MULE_HOME/conf/muleLicenseKey.lic` +* To verify the license before installing it, run the following command: ++ +`mule -verifyLicenseDetails [path_to_license]` ++ +The command outputs the same information as in `mule -verifyLicense`. + == Download your License Key File . Log in to https://help.mulesoft.com[the Support portal] using your login information. If you do not have credentials to log in, please contact your Customer Success Manager. diff --git a/modules/ROOT/pages/intro-java-integration.adoc b/modules/ROOT/pages/intro-java-integration.adoc index 76bd444709..fb64eaafcd 100644 --- a/modules/ROOT/pages/intro-java-integration.adoc +++ b/modules/ROOT/pages/intro-java-integration.adoc @@ -66,7 +66,7 @@ For more detail on this module, see xref:connectors::scripting/scripting-module. While the Scripting module is a very powerful tool that allows for interoperation with Java by executing any random set of instructions, often you simply need to just instantiate a class or execute a single method. While Mule 3 usually relies on MEL for this, the Java module was introduced in Mule 4 to allow for these use cases. Other advantages of the Java module over the Scripting module are: -* Support for xref:7.1@studio::datasense-explorer.adoc[DataSense]: Each time you execute a method, you will get DataSense for the output type and the method's input arguments. +* Support for xref:studio::datasense-explorer.adoc[DataSense]: Each time you execute a method, you will get DataSense for the output type and the method's input arguments. * UI Support: You get visual aids in terms of methods available for each class, autocompletion, and so on. === Create a New Java Instance diff --git a/modules/ROOT/pages/logging-mdc.adoc b/modules/ROOT/pages/logging-mdc.adoc index 0bc4cb1a44..469634605b 100644 --- a/modules/ROOT/pages/logging-mdc.adoc +++ b/modules/ROOT/pages/logging-mdc.adoc @@ -17,6 +17,9 @@ To use the MDC Logging operations, complete the following tasks: * Install the Mule Tracing module in your application. * Change the pattern layouts in the `log4j2.xml` file to `MDC`. +[NOTE] +If you are using Anypoint Runtime Fabric or CloudHub 2.0 to deploy your Mule applications, MDC logging isn't supported. + == Install the Mule Tracing Module Follow the next steps to install the Mule Tracing module in your application. 
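+For the pattern-layout change mentioned in the previous list, the usual approach is to add MDC values to the appender pattern in `log4j2.xml`. The following is only a sketch: `%X{key}` is the standard Log4j 2 conversion for MDC entries, and the surrounding pattern and key name are illustrative:
+
+[source,xml]
+----
+<PatternLayout pattern="%-5p %d [%t] [event: %X{correlationId}] %c: %m%n"/>
+----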
diff --git a/modules/ROOT/pages/maven-reference.adoc b/modules/ROOT/pages/maven-reference.adoc index 36c0a9ed87..e6975d38a5 100644 --- a/modules/ROOT/pages/maven-reference.adoc +++ b/modules/ROOT/pages/maven-reference.adoc @@ -62,7 +62,7 @@ To configure the public MuleSoft repositories, add the following to your project anypoint-exchange Anypoint Exchange - https://maven.anypoint.mulesoft.com/api/v1/maven + https://maven.anypoint.mulesoft.com/api/v3/maven default @@ -71,6 +71,12 @@ To configure the public MuleSoft repositories, add the following to your project https://repository.mulesoft.org/releases/ default + + mulesoft-public + MuleSoft Public Repository + https://repository.mulesoft.org/nexus/content/repositories/public/ + default + diff --git a/modules/ROOT/pages/migration-aes.adoc b/modules/ROOT/pages/migration-aes.adoc index e7ce3a993d..5ded4b3747 100644 --- a/modules/ROOT/pages/migration-aes.adoc +++ b/modules/ROOT/pages/migration-aes.adoc @@ -5,7 +5,7 @@ endif::[] In Mule 4, the Anypoint Enterprise Security module was split into different modules: -* xref:cryptography.adoc[Cryptography Module] replaces the Mule 3 Encryption and Signature modules +* xref:securing.adoc#cryptography-module[Cryptography Module] replaces the Mule 3 Encryption and Signature modules * xref:secure-configuration-properties.adoc[Secure Configuration Properties Module] replaces the Mule 3 Secure Property Placeholder * xref:connectors::validation/validation-connector.adoc[Validation Module] incorporated functionality from the Mule 3 Filters Module * xref:connectors::oauth/oauth2-provider-documentation-reference.adoc[OAuth2 Provider] replaces the Mule 3 OAuth2 Provider diff --git a/modules/ROOT/pages/migration-api-gateways-policies.adoc b/modules/ROOT/pages/migration-api-gateways-policies.adoc index 6a1353cb1d..a8cc2f044b 100644 --- a/modules/ROOT/pages/migration-api-gateways-policies.adoc +++ b/modules/ROOT/pages/migration-api-gateways-policies.adoc @@ -6,7 +6,7 @@ endif::[] // authors: Federico Balbi and Nahuel Dalla Vecchia (assigned by Eva) // Explain generally how and why things changed between Mule 3 and Mule 4. -In Mule 4, policies underwent major changes. A full explanation of them is available in https://docs.mulesoft.com/api-manager/custom-policy-4-reference[Custom Policy General Reference (Nov 2017)] +In Mule 4, policies underwent major changes. == Defining Policy Behavior diff --git a/modules/ROOT/pages/migration-cheat-sheet.adoc b/modules/ROOT/pages/migration-cheat-sheet.adoc index fc79555801..018a1b40c9 100644 --- a/modules/ROOT/pages/migration-cheat-sheet.adoc +++ b/modules/ROOT/pages/migration-cheat-sheet.adoc @@ -30,4 +30,4 @@ To help you move from Mule 3 to Mule 4, we built this index to help you find the * Migrating custom components: You can use the xref:mule-sdk::index.adoc[Mule SDK] to create your own reusable components * Migrating DevKit based components: There's a xref:mule-sdk::dmt.adoc[DevKit Migration Tool] that helps to migrate DevKit projects for Mule 3 into Mule SDK ones. * xref:migration-transports.adoc[Transport service overrides]: Covers how to migrate from generic transports. - +* xref:mule-high-availability-ha-clusters.adoc#configure-the-performance-profile[]: At a container level, change the property from `mule.clusterPartitioningMode=OPTIMIZE_PERFORMANCE` to `mule.cluster.storeprofile=performance`. At an individual application level, you can configure the store profile for a specific Mule application. 
\ No newline at end of file diff --git a/modules/ROOT/pages/migration-connectors-database.adoc b/modules/ROOT/pages/migration-connectors-database.adoc index 51cf136949..3e5a449919 100644 --- a/modules/ROOT/pages/migration-connectors-database.adoc +++ b/modules/ROOT/pages/migration-connectors-database.adoc @@ -221,7 +221,7 @@ Microsoft SQL Server, MySQL, Derby, Oracle configurations require a driver. ---- WARNING: Because of the new Mule 4 ClassLoading mechanism, this dependency must be declared as a Shared Library to be -exported to the DB Connector. Using Studio or Flow Designer, this will be automatically configured. +exported to the DB Connector. Using Studio, this will be automatically configured. //TODO LINK TO HOW TO ADD A SHARED LIBRARY OR THE USER WON'T NEVER REALIZE HOW TO DO IT @@ -238,7 +238,6 @@ In Mule 4, you can add child elements for these settings under the database conn * <> * <> * <> -* <> [[db_transactions]] ==== Database Transactions @@ -369,7 +368,7 @@ The examples below show changes to the XML for these settings: [[database_operations_overview]] == Database Connector Operations -* Query for SQL query text and input parameters (as shown here in <>). +* Query for SQL query text and input parameters * Streaming strategy settings (as shown here in <>) * <> * Query settings diff --git a/modules/ROOT/pages/migration-connectors-xml.adoc b/modules/ROOT/pages/migration-connectors-xml.adoc index 2f943b68e1..f0a59e5f77 100644 --- a/modules/ROOT/pages/migration-connectors-xml.adoc +++ b/modules/ROOT/pages/migration-connectors-xml.adoc @@ -250,7 +250,7 @@ This validator will raise an `XML-MODULE:SCHEMA_NOT_HONOURED` error. == Installing the XML Module -To use the XML module, simply add it to your application using the Studio palette or Flow Designer card, or add the following dependency in your `pom.xml` file: +To use the XML module, simply add it to your application using the Studio palette, or add the following dependency in your `pom.xml` file: [source,xml,linenums] ---- diff --git a/modules/ROOT/pages/migration-mel.adoc b/modules/ROOT/pages/migration-mel.adoc index 6ae954382d..c0d542ae69 100644 --- a/modules/ROOT/pages/migration-mel.adoc +++ b/modules/ROOT/pages/migration-mel.adoc @@ -43,7 +43,7 @@ attachments or exception payloads MEL is recommended. The next sections show how to adapt some uses of MEL to Mule 4. === Accessing Context Variables -Except for the following changes, xref:3.9@mule-runtime::mel-cheat-sheet.adoc#server-mule-application-and-message-variables[context variables] +Except for the following changes, context variables (also called Mule Runtime variables) remain the same in DataWeave: [cols="1a,1a", options="header"] @@ -421,7 +421,6 @@ xref:dataweave.adoc[DataWeave Language] https://blogs.mulesoft.com/dev/mule-dev/why-dataweave-main-expression-language-mule-4/[Why DataWeave is the Main Expression Language in Mule 4 Beta] -xref:3.9@mule-runtime::mule-expression-language-mel.adoc[Mule Expression Language (MEL)] (3.9) //// diff --git a/modules/ROOT/pages/migration-module-vm.adoc b/modules/ROOT/pages/migration-module-vm.adoc index 8b21f68bdd..8c5ffa0cd6 100644 --- a/modules/ROOT/pages/migration-module-vm.adoc +++ b/modules/ROOT/pages/migration-module-vm.adoc @@ -6,7 +6,7 @@ endif::[] The VM transport was completely rewritten. It evolved away from the Mule 3 transport model into an operation-based connector. 
This enables many new capabilities: * The ability to consume messages from a queue on demand, unlike the old transport, which only provided a polling inbound endpoint. -* Enhanced xref:7.1@studio::datasense-explorer.adoc[DataSense]. +* Enhanced xref:studio::datasense-explorer.adoc[DataSense]. [[whats_covered_here]] == What's Covered Here? diff --git a/modules/ROOT/pages/migration-munit-test-structure.adoc b/modules/ROOT/pages/migration-munit-test-structure.adoc index 89b203bc9a..8c6ec96add 100644 --- a/modules/ROOT/pages/migration-munit-test-structure.adoc +++ b/modules/ROOT/pages/migration-munit-test-structure.adoc @@ -58,4 +58,4 @@ The following examples compare MUnit tests in 1.x to 2.x. == See Also -* xref:https://docs.mulesoft.com/munit/2.2/munit-test-concept[MUnit 2 Test structure] +* xref:munit::munit-test-concept.adoc[MUnit 2 Test structure] diff --git a/modules/ROOT/pages/migration-transports.adoc b/modules/ROOT/pages/migration-transports.adoc index 3b0ae63b1b..9b7f9756be 100644 --- a/modules/ROOT/pages/migration-transports.adoc +++ b/modules/ROOT/pages/migration-transports.adoc @@ -84,7 +84,7 @@ code does, so you might: a xref:dataweave.adoc[DataWeave] transformation, a xref:connectors::java/java-module.adoc[Java Module], or a xref:connectors::scripting/scripting-module.adoc[Scripting Module]). -* Use the xref:1.1@mule-sdk::index.adoc[Mule SDK] to create a connector +* Use the xref:mule-sdk::index.adoc[Mule SDK] to create a connector that provides your customized transport functionality. As a starting point, you can use open-source connectors as dependencies (for example, https://github.com/mulesoft/mule-http-connector). @@ -100,6 +100,6 @@ requests. xref:migration-connectors.adoc[Migrating Connectors and Modules to Mule 4] -xref:intro-java-integration[Java Integration] +xref:intro-java-integration.adoc[Java Integration] -xref:1.1@mule-sdk::index.adoc[Mule SDK] +xref:mule-sdk::index.adoc[Mule SDK] diff --git a/modules/ROOT/pages/mmp-concept.adoc b/modules/ROOT/pages/mmp-concept.adoc index c007ea8123..0a79b44211 100644 --- a/modules/ROOT/pages/mmp-concept.adoc +++ b/modules/ROOT/pages/mmp-concept.adoc @@ -101,6 +101,8 @@ For example: mvn mule:deploy -Dmule.artifact=myProject/myArtifact.jar ---- +This property is valid for CloudHub, standalone, or on-premises deployments. + === mule:undeploy Goal This goal removes an application from any of the application deployment targets. It uses the information from the plugin configuration to remove the application from the defined deployment target. diff --git a/modules/ROOT/pages/mule-app-dev-hellomule.adoc b/modules/ROOT/pages/mule-app-dev-hellomule.adoc index 3a5e28a974..cbf4579e3b 100644 --- a/modules/ROOT/pages/mule-app-dev-hellomule.adoc +++ b/modules/ROOT/pages/mule-app-dev-hellomule.adoc @@ -1,4 +1,4 @@ -= Hello Mule Tutorial += Tutorial: Create a "Hello World" Mule app ifndef::env-site,env-github[] include::_attributes.adoc[] endif::[] @@ -220,5 +220,5 @@ image::mruntime-hellomule-xml.png[Hello Mule XML Configuration] == See Also -* xref:mule-app-tutorial.adoc[Mule App Development Tutorial] +* xref:mule-app-tutorial.adoc[] * xref:dataweave::dataweave-quickstart.adoc[DataWeave Quickstart] diff --git a/modules/ROOT/pages/mule-app-dev.adoc b/modules/ROOT/pages/mule-app-dev.adoc index 831b612927..6ecd0f5aa7 100644 --- a/modules/ROOT/pages/mule-app-dev.adoc +++ b/modules/ROOT/pages/mule-app-dev.adoc @@ -18,12 +18,12 @@ event and to pass it to either single or multiple threads. 
To get started with Mule application development, you can follow the steps in these tutorials: -* xref:mule-app-dev-hellomule.adoc[Hello Mule Tutorial] +* xref:mule-app-dev-hellomule.adoc[Create a "Hello World" Mule app] + Build a Mule application that interacts with a user in a simple HTTP request-response flow. + -* xref:mule-app-tutorial.adoc[Mule Application Development Tutorial] +* xref:mule-app-tutorial.adoc[Create a Mule app that uses the Database Connector and DataWeave] + Build a Mule application that retrieves data from a database and transforms it to a new structure. @@ -158,10 +158,7 @@ providing cryptographic and other capabilities, such as FIPS compliance. == Development Environments You can develop a Mule application using -xref:studio::index.adoc[Anypoint Studio] (an Eclipse-based IDE), -xref:design-center::about-designing-a-mule-application.adoc[Flow Designer] -(a cloud-based application in Design Center, on Anypoint Platform), -or, if you are an advanced developer, in your own IDE. +xref:studio::index.adoc[Anypoint Studio] (an Eclipse-based IDE), or, if you are an advanced developer, in your own IDE. For example, in Studio, you build and design a Mule application in a project that contains one or more XML-based files. A Mule project supports all the @@ -169,7 +166,7 @@ dependencies required for development. The xref:studio::index.adoc#package-explorer[Package Explorer] view in Studio provides access to the project folders and files that make up a Mule project. Studio provides a design-time environment in which you can also build, run, and test -your Mule application. Flow Designer supports a cloud-based version of a Mule project. +your Mule application. [[version]] == Mule Versioning diff --git a/modules/ROOT/pages/mule-app-tutorial.adoc b/modules/ROOT/pages/mule-app-tutorial.adoc index f0f8cc99a9..22d10e3038 100644 --- a/modules/ROOT/pages/mule-app-tutorial.adoc +++ b/modules/ROOT/pages/mule-app-tutorial.adoc @@ -1,9 +1,9 @@ -= Mule App Development Tutorial += Tutorial: Create a Mule app that uses the Database Connector and DataWeave ifndef::env-site,env-github[] include::_attributes.adoc[] endif::[] -Most integrations require a change to the structure of data as it moves from source to destination. Within a Mule app, you can use the drag-n-drop interface of the Transform Message component to map data from one field or format to another, or you can write mappings by hand within DataWeave scripts. You typically build Mule apps in Studio or Design Center, but you can even write Mule app configurations by hand in XML. This tutorial uses Studio. +Most integrations require a change to the structure of data as it moves from source to destination. Within a Mule app, you can use the drag-n-drop interface of the Transform Message component to map data from one field or format to another, or you can write mappings by hand within DataWeave scripts. You typically build Mule apps in Studio, but you can even write Mule app configurations by hand in XML. This tutorial uses Studio. Using a small data set and a training API available on Exchange, you'll create a project and define the transformation mapping from the API into a different structure and protocol. You'll use the drag-n-drop and also see the xref:dataweave.adoc[DataWeave] code that defines the transformation. After completing this tutorial, you'll be ready to create your own data mappings. 
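+To preview the kind of mapping the tutorial builds, the following sketch shows a Transform Message component with an inline DataWeave script. The field names are hypothetical, not the ones used by the training API:
+
+[source,xml]
+----
+<ee:transform doc:name="Transform Message">
+  <ee:message>
+    <ee:set-payload><![CDATA[%dw 2.0
+output application/json
+---
+payload map (flight) -> {
+  destination: flight.airportCode,
+  price: flight.price as Number
+}]]></ee:set-payload>
+  </ee:message>
+</ee:transform>
+----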
diff --git a/modules/ROOT/pages/mule-deployment-model.adoc b/modules/ROOT/pages/mule-deployment-model.adoc index 9f3cf4b338..68ab92cafa 100644 --- a/modules/ROOT/pages/mule-deployment-model.adoc +++ b/modules/ROOT/pages/mule-deployment-model.adoc @@ -58,7 +58,7 @@ To reload an application, you can: For example, if you want to modify one of your custom classes, make your changes to the custom class, copy the updated class to the java directory, and then `touch` the anchor file. -== Communication Between Mule Instances and the Management Pane +== Communication Between Mule Instances and the Management Plane Mule instances run independently from the Management pane to execute integration logic and serve API requests. This architecture enables you to deploy Mule runtime engine strategically and ensures that it is not a bottleneck to communications. + When an event occurs that causes the Mule instances to become disconnected from the Management pane, the instances continue to run as designed to execute integration and serve APIs without interruption. However, new or updated policies are not pulled and updated until the connection is reestablished. diff --git a/modules/ROOT/pages/mule-high-availability-ha-clusters.adoc b/modules/ROOT/pages/mule-high-availability-ha-clusters.adoc index 93e14085b1..952887e909 100644 --- a/modules/ROOT/pages/mule-high-availability-ha-clusters.adoc +++ b/modules/ROOT/pages/mule-high-availability-ha-clusters.adoc @@ -7,9 +7,16 @@ endif::[] [IMPORTANT] For an equivalent to clustering in CloudHub, see xref:runtime-manager::cloudhub-fabric.adoc[CloudHub HA] for details about how workers can be shared or doubled to scale your application and provide high availability. -Mule Enterprise Edition supports scalable clustering to provide high availability (HA) for applications. +Mule Enterprise Edition supports scalable clustering to provide high availability (HA) for on-premises applications. + +A cluster is a set of Mule runtime engines that acts as a unit. In other words, a cluster is a virtual server composed of multiple nodes (Mule runtime engines). The nodes in a cluster communicate and share Object Store and VM queue data through a distributed shared memory grid. This means that the data is replicated across memory in different machines. + +This shared memory grid, replicated state applies to: + +* persistent and non-persistent Object stores +* persistent and transient VM queues +* Mule runtime LockFactory -A cluster is a set of Mule runtime engines that acts as a unit. In other words, a cluster is a virtual server composed of multiple nodes (Mule runtime engines). The nodes in a cluster communicate and share information through a distributed shared memory grid. This means that the data is replicated across memory in different machines. image::cluster.png[Cluster] @@ -18,15 +25,15 @@ Contact your customer service representative about pricing for this feature. == The Benefits of Clustering -By default, clustering Mule runtime engines ensures high system availability. If a Mule runtime engine node becomes unavailable due to failure or planned downtime, another node in the cluster can assume the workload and continue to process existing events and messages. The following figure illustrates the processing of incoming messages by a cluster of two nodes. Notice that the processing load is balanced across nodes: Node 1 processes message 1 while Node 2 simultaneously processes message 2. +By default, clustering Mule runtime engines ensures high system availability. 
If a Mule runtime engine node becomes unavailable due to failure or planned downtime, another node in the cluster can assume the workload and continue to process messages from VM queues and to service other requests. The following figure illustrates the processing of incoming messages by a cluster of two nodes. Notice that the processing load is balanced across nodes: Node 1 processes message 1 while Node 2 simultaneously processes message 2.
 
 image::failovernofail.png[FailoverNoFail]
 
-If one node fails, the other available nodes pick up the work of the failing node. As shown in the following figure, if Node 2 fails, Node 1 processes both message 1 and message 2.
+If one node fails, the other available nodes pick up the work of the failing node. An external load balancer redirects the failing node's share of traffic to an active node, or, for work that flows through VM queues, the remaining active nodes pick up the in-flight messages and continue processing them. As shown in the following figure, if Node 2 fails, Node 1 processes both message 1 and message 2.
 
 image::failovernode2fail.png[FailoverNode2Fail]
 
-Because all nodes in a cluster of Mule runtime engines process messages simultaneously, clusters can also improve performance and scalability. Compared to a single node instance, clusters can support more users or improve application performance by sharing the workload across multiple nodes or by adding nodes to the cluster.
+Because all nodes in a cluster of Mule runtime engines are active-active and can process messages simultaneously, clusters can also improve performance and scalability. Compared to a single node instance, clusters can support more users or improve application performance by sharing the workload across multiple nodes or by adding nodes to the cluster. Note that not all applications perform better when horizontally scaling or clustering. Performance depends on the nature of the work to be shared with additional nodes. Performance of some applications with a heavy dependency on Object Stores can degrade because of the work required to replicate or coordinate access to the data within the shared memory grid.
 
 The following figure illustrates workload sharing in more detail. Both nodes process messages related to order fulfillment. However, when one node is heavily loaded, it can move the processing for one or more steps in the process to another node. Here, processing of the Process order discount step is moved to Node 1, and processing of the Fulfill order step is moved to Node 2.
 
@@ -42,7 +49,7 @@ If you divide your flows into a series of steps and connect these steps with a c
 You can set up an alert to appear when a node goes down and when a node comes back up.
 
 [NOTE]
-All Mule runtime engines in a cluster actively process messages. Note that each Mule node is also internally scalable – a single node can scale by taking advantage of multiple cores. Mule operates as a single node in a cluster, even when it uses multiple cores.
+All Mule runtime engines in a cluster actively process messages. Note that each Mule node is also vertically scalable – a single node can scale by taking advantage of multiple cores or additional memory. Mule operates as a single node in a cluster, even when it uses multiple cores.
 
 === Concurrency Issues Solved by Clusters
 
@@ -55,20 +62,20 @@ All Mule instances access the same Mule file folders concurrently, which can lea
 All Mule instances get the same TCP requests and then process duplicate messages.
 * JMS topics. 
+
-All Mule instances connect to the same JMS topic, which might lead to the repeated processing of messages when scaling the non-clustered Mule instance horizontally.
+All Mule instances connect to the same JMS topic, which might lead to the repeated processing of messages when scaling the non-clustered Mule instance horizontally. To the JMS broker, the instances appear as separate subscribers, all of which get a copy of each message. This is rarely the desired behavior. To avoid this scenario, a "shared subscriber" configuration is available to instruct the JMS broker to treat all instances as a combined subscriber and to give each instance separate messages.
 * JMS request-reply/request-response. +
 All Mule instances are listening for messages in the same response queue. This implies that a Mule instance might obtain a response that isn't correlated to the request it sent and might result in incorrect responses or make a flow fail with timeout.
-* Idempotent Redelivery Policy. +
-Idempotency doesn’t work if the same request is received by different Mule instances. Duplicated messages aren’t possible.
+* Idempotent Redelivery Policy and Idempotent Message Validation. +
+Idempotency doesn’t work correctly with horizontal scaling if the same request is received by different Mule instances and the Object Store contents used by the Redelivery policy or the Idempotent Message Validator are local to each instance. For a cluster sharing the Object Store values used by these idempotency features, duplicate messages aren’t possible because all nodes share the list of already-processed identifiers.
 * Salesforce streaming API. +
 If multiple instances of the same application are deployed, they will fail because the API only supports a single consumer. There is no failover support if the instance connected is stopped or crashes.
 
 == Prerequisites
 
-* A cluster requires at least two Mule runtime engine instances, each one running on different machines.
+* A cluster requires at least two Mule runtime engine instances, each running on a different machine to avoid a single point of failure.
 * Mule high availability (HA) requires a reliable network connection between servers to maintain synchronization between the nodes in the cluster.
 * Keep the ports configured for the Mule cluster open.
 ** If you configure your cluster through Runtime Manager and you use the default ports, keep TCP ports `5701`, `5702`, and `5703` open.
@@ -88,24 +95,29 @@ High Availability is a method of designing a computer system to prevent any down
 
 == Cluster Design and Management
 
-Anypoint Runtime Manager enables you to set up a cluster of Mule instances, and then deploy an application to run on the cluster. You can also monitor the status information for clusters and individual nodes. When clustered, you can easily manage several servers as one.
+Anypoint Runtime Manager enables you to set up a customer-hosted cluster of Mule instances, and then deploy an application to run on the cluster. You can also monitor the status information for clusters and individual nodes. When clustered, you can easily manage several servers as one.
 
 [NOTE]
 For more detailed information about cluster management, see xref:runtime-manager::managing-servers.adoc[Managing Servers] in Runtime Manager.
 
+For Anypoint Runtime Fabric, an option exists at deployment time to provision the Mule runtime replicas in cluster mode.
+
 A Mule Cluster consists of two or more Mule runtime engines, or nodes, grouped together and treated as a single unit. 
With the initial configuration, MuleSoft recommends that you scale a cluster to no more than eight Mule runtime engines. With Anypoint Runtime Manager, you can deploy, monitor, or stop all the Mule runtime engines in a cluster as if they were a single Mule runtime engine.
 
+[NOTE]
+CloudHub doesn't use this clustering configuration for provisioned workers, but it has other High Availability features that provide an equivalent experience, such as externalized state management via Anypoint Object Store (OSv2) or the persistent queues feature (which moves the VM queues outside the workers so they can be shared and survive re-provisioning).
+
 All the nodes in a cluster share memory as illustrated below:
 
 image::topology-4-cluster.png[topology_4-cluster]
 
-Mule uses an active-active model to cluster Mule runtime engines. The benefit of this model over an active-passive approach is that your application runs in all nodes, splitting message processing with the other nodes in your cluster, which expedites processing.
+[NOTE]
+To ensure operational performance, Mule uses an active-active model to cluster Mule runtime engines. The benefit of this model over an active-passive approach is that your application runs in all nodes, splitting message processing with the other nodes in your cluster, which expedites processing.
 
 === Primary Node Difference
 
-In an active-active model, there is no primary node. However, one of the nodes acts as the primary polling node. This means that sources can be configured to only be used by the primary polling node so that no other node reads messages from that source.
+In an active-active model, all nodes can perform processing. However, one of the nodes acts as the primary node, which runs the schedulers and any event sources marked as "primary node only". This model enables you to configure sources to run on a single, primary polling node in a cluster and prevent other nodes in the cluster from reading messages from those sources.
 
 This feature works differently depending on the source type:
 
@@ -121,9 +133,11 @@ This feature works differently depending on the source type:
 
 ----
 
-== Queues
+This example suits a use case in which the application receives messages from JMS and serial, one-message-at-a-time processing is critical. The JMS Connector's On New Message source has "primary node only" selected by default, but most connector sources don't; the decision about the default configuration lies with the developers of each connector. For a use case in which all nodes should perform processing, clear the `primaryNodeOnly` value so that all cluster nodes enable the source.
+
-You can set up a VM queue explicitly to load balance across Mule runtime engines (nodes). Thus, if your entire application flow contains a sequence of child flows, the cluster can assign each successive child flow to whichever Mule runtime engine happens to be available at the time. The cluster can potentially process a single message on multiple nodes as it passes through the VM endpoints in the application flow, as illustrated below:
+== Queues
+Execution of code, including flows called through a Flow Reference, happens on the same node on which processing of the event began. To share or distribute execution across the clustered Mule runtime engines, publish to a VM queue explicitly to load balance across nodes. 
Both persistent and transient VM queues use the shared memory grid of the cluster, and any transition through a VM queue can jump to another active node. Thus, if your entire application flow contains a sequence of child flows linked through a VM queue publish and listener pair, the cluster can assign each successive child flow to whichever Mule runtime engine happens to be available at the time. The cluster can potentially process a single message on multiple nodes as it passes through the VM queues in the application flow, as illustrated below:
 
 image::load-balancing.png[load_balancing]
 
@@ -141,6 +155,9 @@ Connectors such as JMS, VM, and JDBC provide built-in transactional support, thu
 You must use XA transactions to move messages between dissimilar connectors that support transactions. This ensures that the Mule runtime engine commits associated transactions from all the dissimilar connectors as a single unit.
 
+[NOTE]
+Transactions can't span operations on systems whose connectors don't support transactions. If any of the involved operations can't be included in the transaction, consider an alternative pattern such as https://www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf[Sagas].
+
 == Cluster Support for Connectors
 
 All Mule connectors are supported within a cluster. Because of differences in the way different connectors access inbound traffic, the details of this support vary. In general, outbound traffic acts the same way inside and outside a cluster. Mule runtimes support three basic types of connectors:
 
@@ -161,11 +178,11 @@ Listener-based connectors read data using a protocol that fully supports concurr
 
 Listener-based connectors are supported in clusters as described below:
 
 * Listener-based connectors fully support multiple readers and writers. No special considerations apply either to input or to output.
-* Note that, in a cluster, VM connector queues are a shared, cluster-wide resource. The cluster will automatically synchronize access to the VM connector queues. Because of this, any cluster node can process a message written to a VM queue. This makes VM ideal for sharing work among cluster nodes.
+* Note that, in a cluster, VM connector queues (both persistent and transient) are a shared, cluster-wide resource. The cluster will automatically synchronize access to the VM connector queues. Because of this, any cluster node can process a message written to a VM queue. This makes VM ideal for sharing work among cluster nodes.
 
 === Resource-based Connectors
 
-Resource-based connectors read data from a resource that allows multiple concurrent accessors, but does not natively coordinate their use of the resource. Examples of resource-based connectors include File, FTP, SFTP, E-mail, and JDBC.
+Resource-based connectors read data from a resource that allows multiple concurrent accessors, but doesn't natively coordinate their use of the resource. Examples of resource-based connectors include File, FTP, SFTP, E-mail, and JDBC.
 
 Resource-based connectors are supported in clusters as described below:
 
@@ -203,39 +220,41 @@ Ensuring that all cluster nodes reside on the same LAN is the best practice to l
 
 === Distributed Data-center Clustering
 
-Linking cluster nodes through a WAN network introduces many possible points of failure, such as external routers and firewalls, which can prevent proper synchronization between cluster nodes. This not only affects performance but also requires you to plan for possible side effects in your app. 
For example, when two cluster nodes reconnect after getting cut off by a failed network link, the ensuing synchronization process can cause messages to be processed twice, which creates duplicates that must be handled in your application logic. +Linking cluster nodes through a WAN network introduces many possible points of failure, such as external routers and firewalls, which can prevent proper synchronization between cluster nodes. This not only affects performance but also requires you to plan for possible side effects in your app. For example, when two cluster nodes reconnect after getting cut off by a failed network link, the ensuing synchronization process can cause messages to be processed twice, which creates duplicates that must be handled in your application logic. It might also mean that applications using Object Store end up with inconsistent state due to lack of ability for communication between the nodes. Another issue which could occur is that multiple nodes become "primary" due to separate nodes believing they are the sole node in the cluster. Note that it is possible to use nodes of a cluster located in different data centers and not necessarily located on the same LAN, but some restrictions apply. -To prevent this behavior, it is necessary to enable the Quorum Protocol. This protocol is used to allow one set of nodes to continue processing data while other sets do nothing with the shared data until they reconnect. Basically, when a disconnection occurs, only the portion with the most nodes will continue to function. For instance, assume two data centers, one with three nodes and another with two nodes. If a connectivity problem occurs in the data center with two nodes, then the data center with three nodes will continue to function, and the second data center will not. If the three-node data center goes offline, none of your nodes will function. To prevent this outage, you must create the cluster in at least three data centers with the same number of nodes. It is unlikely for two data centers to crash, so if just one data center goes offline, the cluster will always be functional. +To prevent this "split brain" processing behavior, it is necessary to enable the Quorum Protocol. This protocol is used to allow one set of nodes to continue processing data while other sets do nothing with the shared data until they reconnect. Basically, when a disconnection occurs, only the portion with the most nodes will continue to function. For instance, assume two data centers, one with three nodes and another with two nodes. If a connectivity problem occurs in the data center with two nodes, then the data center with three nodes will continue to function, and the second data center will not. If the three-node data center goes offline, none of your nodes will function. To prevent this outage, you must create the cluster in at least three data centers with the same number of nodes. It is unlikely for two data centers to crash, so if just one data center goes offline or is separated from the others by a network fault, the cluster will always be functional. -IMPORTANT: A cluster partition that does not have enough nodes to function will continue reacting to external system calls, but all operations over the object stores will fail, preventing data generation. +IMPORTANT: A cluster partition that doesn't have enough nodes to function will continue reacting to external system calls, but all operations over the object stores will fail, preventing data generation. 
==== Limitations * Quorum is only supported in Object Store-related operations. -* Distributed locking is not supported, which affects: +* Distributed locking isn't supported, which affects: - File/FTP connector polling for files concurrent - Idempotent Redelivery Policy component - Idempotent Message Filter component -* In-memory messaging is not supported, which affects: +* In-memory messaging isn't supported, which affects: - VM connector * The Quorum feature can only be configured manually. +* Batch jobs don't use high availability features. == Clustering and Load Balancing -When Mule clusters are used to serve TCP requests (where TCP includes SSL/TLS, UDP, Multicast, HTTP, and HTTPS), some load balancing is needed to distribute the requests among the clustered instances. There are various software load balancers available, two of them are: +When Mule clusters are used to serve TCP requests (where TCP includes SSL/TLS, UDP, Multicast, HTTP, and HTTPS), some load balancing is needed to distribute the requests among the clustered instances. Though Anypoint Runtime Fabric includes load-balancer capability as part of the underlying Docker Kubernetes (K8s) infrastructure, customer-hosted, manually-provisioned clusters require you to supply a third party load balancer or perform client-side load balancing and fail-over. +There are various software load balancers available, two of them are: * NGINX, an open-source HTTP server and reverse proxy. You can use NGINX's `HttpUpstreamModule` for HTTP(S) load balancing. * The Apache web server, which can also be used as an HTTP(S) load balancer. -Many hardware load balancers can also route both TCP and HTTP or HTTPS traffic +Many hardware load balancers can also route both TCP and HTTP or HTTPS traffic. [[cluster-high-performance]] == Clustering for High Performance -This section applies only for on-premises deployments. High performance is implemented differently on CloudHub and Pivotal Cloud Foundry. + -See xref:runtime-manager::deployment-strategies.adoc[Deployment Strategies] for more information about each of these deployments. +This section applies only for customer-hosted, manually provisioned cluster deployments. + +See xref:runtime-manager::deployment-strategies.adoc[Deployment Strategies] for more information about other deployment options. If high performance is your primary goal (rather than reliability), you can configure a Mule cluster or an individual application for maximum performance using a performance profile. By implementing the performance profile for specific applications within a cluster, you can maximize the scalability of your deployments while deploying applications with different performance and reliability requirements in the same cluster. Performance profiles that you configure at the container level apply to all applications within the container. Application-level configuration overrides container-level configuration. @@ -249,7 +268,7 @@ Setting the performance profile has two effects: [WARNING] When one node goes down, the data associated to it is lost. -Setting the performance profile does not affect memory sharing. In cluster mode, Mule always distributes and shares the object stores between nodes. +Setting the performance profile doesn't affect memory sharing. In cluster mode, Mule always distributes and shares the object stores between nodes. 
=== Configure the Performance Profile @@ -289,7 +308,7 @@ Remember that an application-level configuration overrides a container-level con [WARNING] ==== -In cases of high load with endpoints that do not support load balancing, applying the performance profile might degrade performance. If you are using a File-based connector with an asynchronous processing strategy, JMS topics, multicasting, or HTTP connectors without a load balancer, the high volume of messages entering a single node can cause bottlenecks, and thus it can be better for performance to turn off the performance profile for these applications. +In cases of high load with endpoints that don't support load balancing, applying the performance profile might degrade performance. If you are using a File-based connector with an asynchronous processing strategy, JMS topics, multicasting, or HTTP connectors without a load balancer, the high volume of messages entering a single node can cause bottlenecks, and thus it can be better for performance to turn off the performance profile for these applications. ==== You can also choose to define the minimum number of machines that a cluster requires to remain in an operational state. This configuration grants you a consistency improvement in the overall performance. @@ -326,6 +345,7 @@ There are a number of recommended practices related to clustering. These include * Use distributed stores such as those used with the VM or JMS connector – these stores are available to an entire cluster. This is preferable to the non-distributed stores used with connectors such as File, FTP, and JDBC – these stores are read by a single node at a time. * Use the VM connector to get optimal performance. Use the JMS connector for applications where data needs to be saved after the entire cluster exits. * Implement reliability patterns to create high reliability applications. +* In HA cluster mode, all Object Store content, both in-memory and persistent (as defined by the `persistent` parameter), is stored in the distributed memory grid. As long as the cluster maintains quorum, this data survives application redeploys or restarts. To fully clear the Object Store content, all cluster nodes must be stopped before restarting. For scenarios where data persistence is required even after a full cluster shutdown, configure the Object Store to use a JDBC-based store instead of the memory grid. See xref:creating-and-managing-a-cluster-manually.adoc#object-store-persistence[Object Store Persistence]. == See Also diff --git a/modules/ROOT/pages/mule-server-notifications.adoc b/modules/ROOT/pages/mule-server-notifications.adoc index afb01df85f..ab7f26517d 100644 --- a/modules/ROOT/pages/mule-server-notifications.adoc +++ b/modules/ROOT/pages/mule-server-notifications.adoc @@ -279,8 +279,8 @@ All notifications extend `java.util.EventObject`, and you can use the `getSource |Exception Strategy Notification |ComponentLocation |Component name |The flow component that triggered this notification. -|Extension Notification |Object | |The payload can change from one extension to another. +|Extension Notification |Object |Extension name |The payload can change from one extension to another. -|Connector-Message Notification ||ComponentLocation |Component name |The flow component that triggered this notification. +|Connector-Message Notification |ComponentLocation |Component name |The flow component that triggered this notification. 
|=== diff --git a/modules/ROOT/pages/mule-upgrade-tool.adoc b/modules/ROOT/pages/mule-upgrade-tool.adoc index 55c340cf34..e34ed6e2e4 100644 --- a/modules/ROOT/pages/mule-upgrade-tool.adoc +++ b/modules/ROOT/pages/mule-upgrade-tool.adoc @@ -4,7 +4,11 @@ include::release-notes::partial$mule-upgrade-tool/mule-upgrade-tool-1.1.0.adoc[t If the Runtime Manager agent is installed in your current Mule instance, the upgrade tool also updates the agent version as part of the upgrade process. -The Mule upgrade tool supports upgrading clustered Mule instances by manually upgrading each node using the tool. For additional information on how to proceed with the upgrade, see <>. Though highly recommended, using the tool is not strictly necessary to upgrade your current Mule instance. For a completely manual upgrade, see xref:release-notes::mule-runtime/updating-mule-4-versions.adoc#mulerunvers[Upgrading an On-Premises Mule Instance Managed Through Runtime Manager]. +Upgrading the Anypoint Monitoring agent isn't supported. To proceed with an upgrade, you must first uninstall the agent, and reinstall it after the Mule instance upgrade is complete. See xref:monitoring::am-installing.adoc#update-the-anypoint-monitoring-agent[Update the Anypoint Monitoring Agent]. + +The upgrade tool doesn't modify Mule applications running on the runtime you're upgrading. + +The Mule upgrade tool supports upgrading clustered Mule instances by manually upgrading each node using the tool. For additional information on how to proceed with the upgrade, see <>. Though highly recommended, using the tool isn't strictly necessary to upgrade your current Mule instance. For a completely manual upgrade, see xref:release-notes::mule-runtime/updating-mule-4-versions.adoc#mulerunvers[Upgrading an On-Premises Mule Instance Managed Through Runtime Manager]. == Before You Begin @@ -156,8 +160,7 @@ The following are the supported options for this subcommand. The following are some of the most common error messages from the upgrade tool and include the error description and a proposed solution. -=== Missing required subcommand - +=== Missing Required Subcommand ---- ./upgrade-tool @@ -180,13 +183,11 @@ Ensure you are running the `upgrade-tool` command and specifying any of the supp The `-h` or `--help` options enable you to get additional details for a certain subcommand, for example: - ---- $ ./upgrade-tool rollback --help ---- -=== Missing required option - +=== Missing Required Option ---- ./upgrade-tool upgrade @@ -213,7 +214,6 @@ Run the following command to get information about which arguments are required After running the command, the tool outputs additional information: - ---- Mule Runtime Upgrade Tool ───────────────────────── @@ -235,8 +235,7 @@ Upgrades a Mule Runtime to a newer version The `Usage:` line specifies which options and arguments are optional by enclosing them in square brackets (`[`,`]`). Options and arguments without square brackets are mandatory. -=== No space left on device - +=== No Space Left on Device ---- $ ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 @@ -263,15 +262,13 @@ enough disk space available and that any other requirements are met. 
On Linux environments, use the `df` command to check available disk space: - ---- $ df -h /opt Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg-opt 419G 205G 193G 52% /opt ---- -=== Version should be newer - +=== Version Should Be Newer ---- ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 @@ -292,7 +289,7 @@ This error occurs when the upgrade command specifies a new Mule distribution tha Verify that the downloaded Mule distribution is in a later version than your current Mule instance. If you continue to receive this error message during the upgrade, it means that the current Mule instance is already updated or running the latest available version. -=== Missing reading permissions +=== Missing Reading Permissions ---- ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 @@ -315,7 +312,7 @@ Read permissions in the new Mule distribution are required but not set for the u Obtain read permission for files that the upgrade identifies as unreadable. Contact your system administrator for assistance, if necessary. -=== Missing writing permissions +=== Missing Writing Permissions ---- ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 @@ -338,7 +335,7 @@ Write permissions in the old Mule distribution are required but not set for the Obtain write permission to files that the upgrade tool identifies. Contact your system administrator for assistance, if necessary. -=== Mule Runtime should be stopped +=== Mule Runtime Should Be Stopped ---- ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 @@ -361,7 +358,7 @@ The upgrade tool detected that Mule is running. Stop Mule before starting the upgrade process. To check the current status, use the command `${MULE_HOME}/bin/mule status`. -=== Mule version is not supported for an upgrade +=== Mule Version Isn't Supported for an Upgrade ---- ./upgrade-tool upgrade -n /tmp/mule-enterprise-standalone-4.4.0-20211104 diff --git a/modules/ROOT/pages/notifications-configuration-reference.adoc b/modules/ROOT/pages/notifications-configuration-reference.adoc index d69376c0f1..647732881e 100644 --- a/modules/ROOT/pages/notifications-configuration-reference.adoc +++ b/modules/ROOT/pages/notifications-configuration-reference.adoc @@ -104,7 +104,6 @@ You can specify the following types of notifications using the `event` attribute * CUSTOM * EXCEPTION * EXCEPTION-STRATEGY -* EXTENSION * MANAGEMENT * MESSAGE-PROCESSOR * PIPELINE-MESSAGE diff --git a/modules/ROOT/pages/on-error-scope-concept.adoc b/modules/ROOT/pages/on-error-scope-concept.adoc index b293fad7a2..a6fbce6e97 100644 --- a/modules/ROOT/pages/on-error-scope-concept.adoc +++ b/modules/ROOT/pages/on-error-scope-concept.adoc @@ -240,4 +240,4 @@ output application/json * xref:mule-error-concept.adoc[Mule Errors] * xref:try-scope-concept.adoc[Try Scope] -* xref:mule-server-notifications[Mule Runtime Engine Notifications] +* xref:mule-server-notifications.adoc[Mule Runtime Engine Notifications] diff --git a/modules/ROOT/pages/package-a-mule-application.adoc b/modules/ROOT/pages/package-a-mule-application.adoc index 47717ff402..e61a9ab074 100644 --- a/modules/ROOT/pages/package-a-mule-application.adoc +++ b/modules/ROOT/pages/package-a-mule-application.adoc @@ -118,7 +118,10 @@ From the command line in your project's folder, execute the package goal: mvn clean package ---- -The plugin packages your application and creates the deployable JAR file into the target directory within your project's folder. 
+ +The plugin packages your application and creates the deployable JAR file into the target directory within your project's folder. + +[NOTE] +If there is a dependency version conflict in your `pom.xml`, the latest version is used. == Create an Application Package to Import into Anypoint Studio diff --git a/modules/ROOT/pages/profiling-mule.adoc index 5de05a0a0e..4e396d75d9 100644 --- a/modules/ROOT/pages/profiling-mule.adoc +++ b/modules/ROOT/pages/profiling-mule.adoc @@ -117,21 +117,5 @@ The agent is loaded and is listening on port 10001. You can connect to it from the profiler UI. ---- - -=== Running the Profiler - -Run the profiler by running Mule with the *-profile* switch, for example: - ----- -./mule -profile ----- - -You can add YourKit startup options by entering multiple parameters, separated by commas, for example: + ----- --profile onlylocal,onexit=memory ----- -This integration pack automatically resolves configuration differences for Java 1.4.x and Java 5.x/6.x. - - == See Also https://www.yourkit.com/[YourKit] diff --git a/modules/ROOT/pages/reconnection-strategy-about.adoc index e6e3d562fa..02e50cd937 100644 --- a/modules/ROOT/pages/reconnection-strategy-about.adoc +++ b/modules/ROOT/pages/reconnection-strategy-about.adoc @@ -11,6 +11,9 @@ For example, if an operation in Anypoint Connector for FTP (FTP Connector) fails You can modify this default behavior by configuring a reconnection strategy for the operation. +[NOTE] +Operation executions that fail with connectivity errors don't trigger the reconnection strategy configured at the FTP Connector level. In this example, the reconnection strategy applies only to the connection that `ftp:listener` needs to establish with the FTP server to query for new or updated files. + == Configure an Operation Reconnection Strategy You can configure a reconnection strategy for an operation either by modifying the operation properties or by modifying the configuration of the global element for the operation. For example, you can configure a reconnection strategy in an FTP Connector configuration: diff --git a/modules/ROOT/pages/scheduler-concept.adoc index 1b2c83a5ec..af0948fa2e 100644 --- a/modules/ROOT/pages/scheduler-concept.adoc +++ b/modules/ROOT/pages/scheduler-concept.adoc @@ -233,11 +233,13 @@ The Scheduler component also supports Quartz Scheduler special characters: * `/`: Incremental values, for example, `1/7`. * `L`: Last day of the week or month, or last specific day of the month (such as `6L` for the last Saturday of the month). -* `W`: Weekday, which is valid in the month and day-of-the-week fields. +* `W`: Weekday, which is valid in the day-of-month field and must follow a specific day. For example, `15W` for the weekday nearest to the 15th of the month. * `#`: The “nth” occurrence of a day of the week in the month. For example, `6#3` is the third Friday of the month. //source info: +http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html+ +See https://www.quartz-scheduler.org/api/2.3.0/org/quartz/CronExpression.html[CronExpression] for more information.
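+
+For instance, a Scheduler configured with the `W` special character might look like the following sketch. The flow name, time zone, and cron expression are illustrative only; this expression triggers the flow at noon on the weekday nearest to the 15th of each month:
+
+[source,xml,linenums]
+----
+<flow name="monthlyJobFlow">
+    <scheduler>
+        <scheduling-strategy>
+            <!-- 0 0 12 15W * ? = 12:00 on the weekday nearest to the 15th of the month -->
+            <cron expression="0 0 12 15W * ?" timeZone="America/Los_Angeles"/>
+        </scheduling-strategy>
+    </scheduler>
+    <logger level="INFO" message="Running monthly job"/>
+</flow>
+----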
+ This example logs the message "hello" every second: [source,XML,linenums] diff --git a/modules/ROOT/pages/securing.adoc b/modules/ROOT/pages/securing.adoc index 9ad7806e80..5d4dbaecf8 100644 --- a/modules/ROOT/pages/securing.adoc +++ b/modules/ROOT/pages/securing.adoc @@ -24,21 +24,22 @@ Encrypting configuration properties for your applications involves creating a se See details in xref:secure-configuration-properties.adoc[Secure Configuration Properties] == Cryptography Module -The Cryptography module provides the following main cryptography capabilities to a Mule application: +Cryptography Module provides cryptography capabilities to a Mule application. Its main features include: -* Symmetric and asymmetric encryption and decryption of messages -* Message signing and signature validation of signed messages + +* Symmetric encryption and decryption of messages. +* Asymmetric encryption and decryption of messages. +* Message signing and signature validation of signed messages. -This module supports three different strategies to encrypt and sign your messages: +The module supports these strategies to encrypt and sign your messages: -* xref:cryptography-pgp.adoc[PGP] + -Provides basic signing and encryption. -* xref:cryptography-xml.adoc[XML] + +* JCE + +Provides cryptography capabilities of Java Cryptography Extension. +* PGP + +Provides signing and encryption using Pretty Good Privacy. +* XML + Provides signing and encryption of XML documents or elements. -* xref:cryptography-jce.adoc[JCE] + -Provides the wide range of cryptography capabilities available through the Java Cryptography Extension. + -See details in xref:cryptography.adoc[Cryptography Module]. +For details, refer to xref:cryptography-module::index.adoc[Cryptography Module]. == FIPS 140-2 Compliance Support You can configure Mule 4 to run in a FIPS 140-2 certified environment if you meet the following two requirements: diff --git a/modules/ROOT/pages/shared-resources.adoc b/modules/ROOT/pages/shared-resources.adoc index 906b46af8b..b6e8288204 100644 --- a/modules/ROOT/pages/shared-resources.adoc +++ b/modules/ROOT/pages/shared-resources.adoc @@ -5,7 +5,7 @@ endif::[] :keywords: anypoint studio, shared resources, domains, multiple applications, share ports, domain project :page-aliases: tuning-domains.adoc -When you deploy Mule on premises, you can define global configurations such as default error handlers, shared properties, scheduler pools, and connector configurations to be shared among all applications deployed under the same domain. To do so, create a Mule domain and then reference it from each application. As a result, each app now associated with the Mule domain can access shared resources in the domain. + +When you deploy Mule on premises, you can define global configurations such as shared properties, scheduler pools, and connector configurations to be shared among all applications deployed under the same domain. To do so, create a Mule domain and then reference it from each application. As a result, each app now associated with the Mule domain can access shared resources in the domain. + Note that Mule apps are associated with only one domain at a time. Using domains greatly enhances performance when you deploy multiple services on the same on-premises Mule runtime engine (Mule). 
By providing a centralized point for all the shared resources, domains make the class-loading process (and, therefore, metaspace memory usage) more efficient, especially because domain dependencies declared in the `pom.xml` file are also shared in the domain apps. @@ -292,4 +292,3 @@ Imagine that you have a set of HTTP Proxy applications that also apply one API G You can greatly improve this outcome by using a domain that shares the backend server configuration, increasing the number of applications to beyond 100 while also experiencing a balanced use of machine resources and consistent good performance. Note that despite the clear performance advantage in using domains, each deployed application adds its own unique complexity to the shared infrastructure resources. To avoid performance impact, before adding an application, identify the overhead by testing each application individually and then test it coexisting with other applications. - diff --git a/modules/ROOT/pages/streaming-strategies-reference.adoc index bd560cab04..44a98f9e32 100644 --- a/modules/ROOT/pages/streaming-strategies-reference.adoc +++ b/modules/ROOT/pages/streaming-strategies-reference.adoc @@ -15,7 +15,7 @@ on these strategies. | `initialBufferSize` | No | No -| 256 +| 512 | Amount of memory allocated to consume the stream and provide random access to it. If the stream contains more data than fits into this buffer, the memory expands according to the `bufferSizeIncrement` attribute, with an @@ -24,7 +24,7 @@ upper limit of `maxInMemorySize`. | `bufferSizeIncrement` | No | No -| 256 +| 512 | Amount to expand the buffer size if the size of the stream exceeds the initial buffer size. Setting a value of zero or lower indicates that the buffer does not expand and that a STREAM_MAXIMUM_SIZE_EXCEEDED error is raised when the diff --git a/modules/ROOT/pages/tls-configuration.adoc index 6b88b3144a..957cb8816f 100644 --- a/modules/ROOT/pages/tls-configuration.adoc +++ b/modules/ROOT/pages/tls-configuration.adoc @@ -49,7 +49,12 @@ A well-known Certificate Authority (CA) can generate certificates, or you can ge == Keystores and Truststores -The `tls:trust-store` and `tls:key-store` elements in a Mule configuration can reference a specific certificate and key, but if you don't provide values for `tls:trust-store`, Mule uses the default Java truststore. Java updates the default trust store when you update Java, so getting regular updates is recommended to keep well-known CA certificates up-to-date. +In a Mule configuration, the `tls:trust-store` and `tls:key-store` elements allow you to reference specific certificates and keys. If you don't specify a `tls:trust-store`, Mule uses the default Java truststore, which is maintained by Java and updated automatically when Java is updated. This helps make sure that certificates from Certificate Authorities (CAs) remain current. + +Using a custom Java truststore isn't the standard approach for MuleSoft deployments and can lead to connectivity issues with the MuleSoft control plane when MuleSoft certificates are renewed. + +[IMPORTANT] +Use the default Java truststore to avoid connectivity issues. If you use a custom truststore, you're responsible for managing and updating it.
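+
+If you do need a custom keystore or truststore, the following sketch shows how a `tls:context` references them. The file names, passwords, and key alias are placeholders:
+
+[source,xml,linenums]
+----
+<tls:context name="customTlsContext">
+    <!-- Certificates of the servers (or CAs) that this app trusts -->
+    <tls:trust-store path="myTrustStore.jks" password="truststorePassword" type="jks"/>
+    <!-- Certificate and private key that this app presents to its peers -->
+    <tls:key-store path="myKeystore.jks" alias="myKeyAlias" keyPassword="keyPassword" password="keystorePassword" type="jks"/>
+</tls:context>
+----
+
+If you define a context like this, plan to track MuleSoft and CA certificate renewals yourself, as noted above.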
Truststore and keystore contents differ depending on whether they are used for clients or servers: @@ -66,7 +71,20 @@ The keystore contains one or two passwords: == Client Configuration -If the `tls:context` is empty (no `tls:key-store` or `tls:trust-store` defined), then the default values of the JVM are used, which usually include a truststore with certificates for all the major certifying authorities. +If the `tls:context` has an empty truststore defined, then the default values of the JVM are used, which usually include a truststore with certificates for all the major certifying authorities. Consider the following scenarios: + +* When the truststore is defined inline: +---- + + + +---- +* When the truststore is defined with a global element: +---- + + + +---- If the client requires a certificate from the server that it is trying to connect to, then add the `tls:trust-store` element. Set `path` to the location of the truststore file that contains the certificates of the trusted servers. @@ -228,7 +246,7 @@ To enable TLS for Mule apps, configure the `tls:context` element in the Mule XML * <> * <> -* <> +* <> Whichever method you use, we recommend you review the information in <> to understand how the attributes of `tls:context` function. diff --git a/modules/ROOT/pages/transform-dataweave-xml-reference.adoc b/modules/ROOT/pages/transform-dataweave-xml-reference.adoc index 005346de47..e0fe31552f 100644 --- a/modules/ROOT/pages/transform-dataweave-xml-reference.adoc +++ b/modules/ROOT/pages/transform-dataweave-xml-reference.adoc @@ -38,7 +38,7 @@ The `` element is the top-level XML tag for the Transform componen == Adding DataWeave Scripts to the Transform Component -You can either type your DataWeave code into your XML using `CDATA` within a <> element, or you can reference an external `.dwl` file. +You can either type your DataWeave code into your XML using `CDATA` within a <> element, or you can reference an external `.dwl` file. This example that writes a DataWeave script inline within a `` transformation element: diff --git a/modules/ROOT/pages/until-successful-scope.adoc b/modules/ROOT/pages/until-successful-scope.adoc index a743655e41..c3306aba88 100644 --- a/modules/ROOT/pages/until-successful-scope.adoc +++ b/modules/ROOT/pages/until-successful-scope.adoc @@ -10,7 +10,7 @@ Until Successful runs synchronously. If any processor within the scope fails to connect or to produce a successful result, Until Successful retries all the processors within it, including the one that failed, until all configured retries are exhausted. If a retry succeeds, the scope proceeds to the next -component. If the final retry does not succeed, Until Successful produces an error. +component. If the final retry doesn't succeed, Until Successful produces an error. Routing is successful if no exception is raised or if the response matches an expression. @@ -31,7 +31,7 @@ You can configure the following attributes in the Until Successful scope: |=== |Field | XML | Description |Max Retries | `maxRetries` |Specifies the maximum number of retries that are allowed. This attribute can be either a number or an expression that resolves to a number. An error message looks like this: `Message: 'until-successful' retries exhausted.` The Mule error type is `MULE:RETRY_EXHAUSTED`. -|Milliseconds Between Retries | `millisBetweenRetries` |Specifies, in milliseconds, the minimum interval between two retries. The actual interval depends on the previous execution, but it should not exceed twice this number. 
The default value is 60000 milliseconds (one minute). This attribute can be either a number or an expression that resolves to a number. +|Milliseconds Between Retries | `millisBetweenRetries` |Specifies the minimum interval, in milliseconds, between retries. The actual interval depends on the duration of the previous attempt, but it shouldn't exceed twice this value. The default is `60000` (one minute). This attribute can be either a number or an expression that resolves to a number. |=== == Example Configuration of the Until Successful Scope @@ -89,7 +89,7 @@ The next processor after the Until Successful scope executes, in this case, the == Variable Propagation -Every execution of the Until Successful scope starts with the same variables and values present before the execution of the block. New variables or modifications to already-existing variables while processing one element are not visible in the next execution (in case there is an error). If the execution finishes correctly, the variables (and payload) are propagated to the rest of the flow. +Every execution of the Until Successful scope starts with the same variables and values present before the execution of the block. New variables or modifications to already-existing variables while processing one element aren't visible in the next execution (in case there is an error). If the execution finishes correctly, the variables (and payload) are propagated to the rest of the flow. == See also diff --git a/modules/ROOT/pages/using-maven-with-mule.adoc index 5721e084f8..9ad97ddac2 100644 --- a/modules/ROOT/pages/using-maven-with-mule.adoc +++ b/modules/ROOT/pages/using-maven-with-mule.adoc @@ -39,6 +39,8 @@ Anypoint Studio's built-in Maven support minimizes the chances that you would ha * Update your POM file and `settings.xml` when necessary. + +When updating your POM file, the `<packaging>` tag is mandatory. Depending on the type of project you are creating, valid values for this tag are `mule-application`, `mule-domain`, `mule-policy`, and `mule-domain-bundle`. For example, if you create a Mule project using Anypoint Studio, the tag is automatically configured as `mule-application`. ++ If you create Maven projects from the command line using archetypes, you need to manage your POM file manually, and in some cases, adjust your `settings.xml` file to point to the MuleSoft Enterprise repository and supply credentials. In some cases, even if you manage your project with Anypoint Studio, you may need to make manual adjustments to your POM as well. * Use SNAPSHOT version. diff --git a/modules/ROOT/pages/variable-transformer-reference.adoc index 9cf7169cda..dfd3ad7f69 100644 --- a/modules/ROOT/pages/variable-transformer-reference.adoc +++ b/modules/ROOT/pages/variable-transformer-reference.adoc @@ -19,7 +19,7 @@ Set Variable provides a way to set the name and value of Mule variable, along wi | Variable Name (`variableName`) | Required -| Name of the variable, which can be a string or a DataWeave expression. Variable names can include only numbers, characters, and underscores. For example, hyphens are not allowed in the name. +| Name of the variable, which must be a string. Variable names can include only numbers, characters, and underscores. For example, hyphens are not allowed in the name.
| Value (`value`) | Required diff --git a/modules/ROOT/pages/xa-transactions.adoc b/modules/ROOT/pages/xa-transactions.adoc index 53bd246919..2348428511 100644 --- a/modules/ROOT/pages/xa-transactions.adoc +++ b/modules/ROOT/pages/xa-transactions.adoc @@ -23,10 +23,10 @@ A global XA transaction is a reliable way of coordinating multiple XA resources, To configure the event sources that support XA transactions, please check out the Connectors documentation: -* JMS -* IBMMQ -* VM -* Database +* xref:jms-connector::jms-connector-reference.adoc#parameters-7[JMS] +* xref:ibm-mq-connector::ibm-mq-transactions.adoc#xa-transactions[IBMMQ] +* xref:vm-connector::vm-reference.adoc#parameters-6[VM] +* xref:db-connector::database-connector-xa-transactions.adoc[Database] == Configuring a Try Scope to use XA Transactions
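+
+As a quick sketch of what this configuration looks like, the following example shows a Try scope that starts an XA transaction spanning a Database insert and a JMS publish. The flow name, table, queue, and global configuration names are placeholders, and the example assumes XA-capable Database and JMS configurations plus a Bitronix transaction manager (`<bti:transaction-manager/>`) declared in the application:
+
+[source,xml,linenums]
+----
+<bti:transaction-manager/>
+
+<flow name="xaOrderFlow">
+    <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
+    <!-- Both operations commit or roll back together -->
+    <try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
+        <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
+            <db:sql>INSERT INTO orders (id, status) VALUES (:id, 'NEW')</db:sql>
+            <db:input-parameters>#[{ id: payload.id }]</db:input-parameters>
+        </db:insert>
+        <jms:publish config-ref="JMS_Config" destination="ordersQueue" transactionalAction="ALWAYS_JOIN"/>
+    </try>
+</flow>
+----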