Commit: typos

SimonCropp committed May 4, 2016
1 parent 0740fbe commit 682b99c
Showing 9 changed files with 50 additions and 45 deletions.
30 changes: 15 additions & 15 deletions nservicebus/hosting/cloud-services-host/hosting-options.md
@@ -19,32 +19,32 @@ Azure offers various ways to host applications. Each of these hosting options ca

## General Considerations

Because of the size and service nature of the Azure platform, you cannot rely on distributed transactions in this environment. You cannot rely on any setup that would require distributed transactions, including the MSMQ transport. For details, refer to ['Understanding transactions in Azure'](/nservicebus/azure/transactions.md).
Because of the size and service nature of the Azure platform, distributed transactions in this environment cannot be relied upon. Do not rely on any setup that requires distributed transactions, including the MSMQ transport. For details, refer to ['Understanding transactions in Azure'](/nservicebus/azure/transactions.md).


## Azure Virtual Machines

The Virtual Machines hosting model is similar to any other virtualization technology in the datacenter. Machines are created from a virtual machine template, you are responsible for managing their content, and any change you make to them automatically persists in Azure storage services.
The Virtual Machines hosting model is similar to any other virtualization technology in the datacenter. Machines are created from a virtual machine template, their content can be managed, and any change is automatically persisted in Azure storage services.

The installation model is therefore also the same as any on-premises NServiceBus project. Use `NServiceBus.Host.exe` to run the endpoint, or use the `Configure` API to self-host the endpoint, for example, in a website.
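
As a rough illustration, a self-hosted endpoint configured for an Azure transport could look like the following minimal sketch, assuming an NServiceBus 4.x-era `Configure` API; the transport type name and class names are illustrative and differ between package versions:

```csharp
using NServiceBus;

// Minimal self-hosting sketch, assuming the NServiceBus 4.x-era Configure API.
// The transport type and class names are illustrative; exact names differ
// between NServiceBus and NServiceBus.Azure versions.
public class EndpointHost
{
    public static IBus StartBus()
    {
        return Configure.With()
            .DefaultBuilder()
            .UseTransport<AzureStorageQueue>() // or an Azure Service Bus transport
            .UnicastBus()
            .CreateBus()
            .Start();
    }
}
```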

The main difference, as outlined above, is that you should not rely on any technology that itself depends on 2PC. In other words, MSMQ is not a good transport in this environment. Instead, use `AzureStorageQueuesTransport` or `AzureServiceBusTransport`. Other options include deploying `RabbitMQ` or another non-DTC transport to an Azure Virtual Machine.
The main difference, as outlined above, is that any technology that itself depends on 2PC should not be relied upon. In other words, MSMQ is not a good transport in this environment. Instead, use `AzureStorageQueuesTransport` or `AzureServiceBusTransport`. Other options include deploying `RabbitMQ` or another non-DTC transport to an Azure Virtual Machine.

For more information about enabling the Azure storage queues or Azure Service Bus transports, refer to the following documentation:

* [Azure storage queues](/nservicebus/azure-storage-queues/)
* [Azure Service Bus](/nservicebus/azure-service-bus/)

For persistence you can rely on any option, including RavenDB, SQL Server installed on a Virtual Machine, SQL Azure or Azure storage services.
Any persistence option can be used in this scenario.


## Azure Websites

Another deployment model is Azure Websites, where you use a regular website and push it to your favorite source control repository (like GitHub). On your behalf, Microsoft takes the latest issue from the repository, builds the binaries, runs the tests, and deploys to production.
Another deployment model is Azure Websites, where a regular website is pushed via a source control repository (like GitHub). Azure then takes the latest commit from the repository, builds the binaries, runs the tests, and deploys to production.

As for an NServiceBus programming model, this is roughly the same as any other self-hosted endpoint in a website. You use the `Configure` API to set things up and it will work.
As for an NServiceBus programming model, this is roughly the same as any other self-hosted endpoint in a website. Use the `Configure` API to set things up and it will work.

The only quirk in this model is that Azure website is built with cheap hosting in mind. By default, its technology puts the website in suspended mode when there is no traffic. This also implies that if you have an NServiceBus endpoint hosted here, it is also suspended and stops processing messages. However, the 'Always on' feature periodically sends requests to the website to keep it active. This feature requires standard mode and is not available in the free edition.
The only quirk in this model is that Azure Websites is built with cheap hosting in mind. By default, its technology puts the website in suspended mode when there is no traffic. This also implies that if there is an NServiceBus endpoint hosted here, it is also suspended and stops processing messages. However, the 'Always on' feature periodically sends requests to the website to keep it active. This feature requires standard mode and is not available in the free edition.

The advised transports in this environment are `AzureStorageQueuesTransport` and `AzureServiceBusTransport`.

@@ -55,20 +55,20 @@ To learn more about enabling persistence with Azure storage, refer to [Azure sto

## Cloud Services

The third hosting model available on the Azure platform is 'Cloud Services'. In this hosting model, which is intended for applications with huge scalability demands, you define a layout for the application in a service definition file.
The third hosting model available on the Azure platform is 'Cloud Services'. In this hosting model, which is intended for applications with huge scalability demands, a layout for the application is defined in a service definition file.

This layout is based on a concept called `Roles`. Roles define what a specific set of machines should look like, where all should be identical to what is defined in the role. By default, one NServiceBus endpoint translates to a role, meaning that it will be hosted by multiple identical machines at the same time. You specify how many machines should be in each role when deployed (we advise at least two), but the Azure platform will manage them for you, automatically monitoring and updating the machines.
This layout is based on a concept called `Roles`. Roles define what a specific set of machines should look like, where all should be identical to what is defined in the role. By default, one NServiceBus endpoint translates to a role, meaning that it will be hosted by multiple identical machines at the same time. Specify how many machines should be in each role when deployed (we advise at least two), but the Azure platform will manage them, automatically monitoring and updating the machines.

But it does this in a very particular way! To ensure an identical set of machines (i.e., identical to the role template) it will destroy a machine and install a new one. This means that anything that was on the disk of a machine will be lost! This fact makes any transport or persistence option that relies on a disk unsuitable for this environment, including MSMQ, RabbitMQ, ActiveMQ, SQL Server, RavenDB, and so on.

The advised transports in this environment are `AzureStorageQueuesTransport` or `AzureServiceBusTransport`, and the Azure storage persisters for persistence purposes.

NOTE: It is possible to put Cloud Services and Virtual Machines in the same virtual network, so a hybrid architecture with some of the above transports and storage options might still be suitable (as long as you don't rely on the DTC).
NOTE: It is possible to put Cloud Services and Virtual Machines in the same virtual network, so a hybrid architecture with some of the above transports and storage options might still be suitable (as long as there is no reliance on DTC).

Besides the endpoint, the role definition will also include additional services that are deployed to the role instances, of which the most important for the application are these:

* Configuration system: This system allows you to update configuration settings from the Azure management portal, or through the service management API, and the platform will promote these configuration settings to all instances in the roles without downtime.
* Diagnostics service: This system allows you to collect diagnostics information from the different role instances (application logs, event logs, performance counters, etc.) and aggregate them in a central storage account.
* Configuration system: This system allows updating configuration settings from the Azure management portal, or through the service management API, and the platform will promote these configuration settings to all instances in the roles without downtime.
* Diagnostics service: This system allows collecting diagnostics information from the different role instances (application logs, event logs, performance counters, etc.) and aggregating it in a central storage account.

To integrate these facilities with an endpoint, use `NServiceBusRoleEntrypoint`, which wires the regular host into a role entry point. In addition, there are specific NServiceBus `Roles` (not to be confused with Azure roles) such as `AsA_Worker` in the `NServiceBus.Hosting.Azure` package.
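
A hedged sketch of that wiring, assuming the `NServiceBus.Hosting.Azure` package and the Azure SDK's `RoleEntryPoint` base class (class names here are illustrative):

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using NServiceBus;

// Sketch: wiring the NServiceBus host into an Azure worker role entry point.
public class WorkerRole : RoleEntryPoint
{
    readonly NServiceBusRoleEntrypoint nsb = new NServiceBusRoleEntrypoint();

    public override bool OnStart()
    {
        nsb.Start(); // starts the endpoint described by the EndpointConfig below
        return base.OnStart();
    }

    public override void OnStop()
    {
        nsb.Stop();
        base.OnStop();
    }
}

// AsA_Worker is the NServiceBus role (not an Azure role) mentioned above.
public class EndpointConfig : IConfigureThisEndpoint, AsA_Worker
{
}
```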

@@ -81,4 +81,4 @@ The Cloud Services model is best suited to building large scale systems, but in

To support this need to start small, a shared hosting option is available using the `AsA_Host` role. In this model, the role entry point doesn't actually host an endpoint itself. Instead, it downloads, invokes, and manages other worker role entry points as child processes on the same machine.
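
A minimal sketch of such a shared-hosting entry point, under the same package assumptions as above:

```csharp
using NServiceBus;

// Sketch: with the AsA_Host role the entry point hosts no endpoint itself;
// it only downloads and manages other worker entry points as child processes.
public class HostConfig : IConfigureThisEndpoint, AsA_Host
{
}
```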

If you want to learn more about the shared hosting options, refer to [Cloud Services - Shared hosting](/nservicebus/hosting/cloud-services-host/shared-hosting.md).
See also [Cloud Services - Shared hosting](/nservicebus/hosting/cloud-services-host/shared-hosting.md).
2 changes: 1 addition & 1 deletion nservicebus/operations/auditing.md
@@ -13,7 +13,7 @@ redirects:

The distributed nature of parallel, message-driven systems makes them more difficult to debug than their sequential, synchronous and centralized counterparts. For these reasons, NServiceBus provides built-in message auditing for every endpoint. Configure NServiceBus to audit and it will capture a copy of every received message and forward it to a specified audit queue.

It is recommended to specify a central auditing queue for all related endpoints (i.e. endpoints that belong to the same system). By doing so, you can get an overview of the entire system in one place and see how messages correlate. One can look at the audit queue as a central record of everything that is happening in the system. A central audit queue is also required by the Particular Service Platform and especially [ServiceControl](/servicecontrol), which consumes messages from these auditing queues. For more information, see [ServicePulse documentation](/servicepulse/).
It is recommended to specify a central auditing queue for all related endpoints (i.e. endpoints that belong to the same system). This gives an overview of the entire system in one place and shows how messages correlate. One can look at the audit queue as a central record of everything that is happening in the system. A central audit queue is also required by the Particular Service Platform and especially [ServiceControl](/servicecontrol), which consumes messages from these auditing queues. For more information, see [ServicePulse documentation](/servicepulse/).
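
As a hedged illustration, with an NServiceBus 5.x-style configuration API auditing can be enabled in code roughly as follows; the queue name `audit` is an assumption, and older versions configure this via the `AuditConfig` XML section instead:

```csharp
using NServiceBus;

public static class AuditSetup
{
    // Sketch: forward a copy of every processed message to a central audit
    // queue. The queue name "audit" is illustrative.
    public static BusConfiguration Build()
    {
        BusConfiguration busConfiguration = new BusConfiguration();
        busConfiguration.AuditProcessedMessagesTo("audit");
        return busConfiguration;
    }
}
```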


## Handling Audit messages
2 changes: 1 addition & 1 deletion nservicebus/ravendb/upgrades/3to4.md
@@ -33,4 +33,4 @@ snippet:3to4-acccessingravenfromhandler

### Session is available regardless of features enabled

In Version 3, the `RavenStorageSession` was only registered if at least one of [Outbox](/nservicebus/outbox/) and [Sagas](/nservicebus/sagas/) was enabled. There are possible use cases for using the NServiceBus-wrapped RavenDB session, so the prerequisites have been removed.
@@ -10,14 +10,14 @@ redirects:
- nservicebus/deploying-nservicebus-in-a-windows-failover-cluster
---

NServiceBus is designed for scalability and reliability, but to take advantage of these features, you need to deploy it in a Windows Failover Cluster. Unfortunately, information on how to do this effectively is, as yet, incomplete and scattered. This article describes the process for deploying NServiceBus in a failover cluster. This article does not cover the generic setup of a failover cluster. There are other, better resources for that, such as [Creating a Cluster in Windows Server 2008](https://blogs.msdn.microsoft.com/clustering/2008/01/18/creating-a-cluster-in-windows-server-2008/) or [Server 2012](https://technet.microsoft.com/en-us/library/dn505754.aspx). The focus here is the setup related to NServiceBus.
NServiceBus is designed for scalability and reliability, but to take advantage of these features, it is necessary to deploy it in a Windows Failover Cluster. Unfortunately, information on how to do this effectively is, as yet, incomplete and scattered. This article describes the process for deploying NServiceBus in a failover cluster. This article does not cover the generic setup of a failover cluster. There are other, better resources for that, such as [Creating a Cluster in Windows Server 2008](https://blogs.msdn.microsoft.com/clustering/2008/01/18/creating-a-cluster-in-windows-server-2008/) or [Server 2012](https://technet.microsoft.com/en-us/library/dn505754.aspx). The focus here is the setup related to NServiceBus.


## Planning the infrastructure

A simple setup for scalability and reliability includes at least two servers in a failover cluster. The failover cluster servers run a distributor process with a timeout manager for each logical message queue.

In addition you have one or more additional servers called worker nodes. These contain endpoints with your message handlers and they are the servers you add more of when you need to scale out. The endpoints on worker nodes request work from the clustered distributors, do the work, and then ask for more.
In addition, there are one or more additional servers called worker nodes. These contain endpoints with message handlers and are the servers to add more of when scaling out. The endpoints on worker nodes request work from the clustered distributors, do the work, and then ask for more.


## Setting up the clustered service
@@ -34,7 +34,7 @@ Set up a clustered DTC access point:
Configure DTC for NServiceBus:

1. On each server, in `Administrative Tools - Component Services`, expand `Component Services - Computers - My Computer - Distributed Transaction Coordinator`.
1. For the Local DTC, if the clustered DTC is on the current node, you will see a Clustered DTCs folder with the clustered DTC name inside it.
1. For the Local DTC, if the clustered DTC is on the current node, note the Clustered DTCs folder with the clustered DTC name inside it.
1. For each instance (three times in total, counting the Local DTC on each node and the clustered instance), right-click, select Properties, and switch to the Security tab.
1. At the very least, check "Network DTC Access" and "Allow Outbound."
1. Optionally, check "Allow Remote Clients" and "Allow Inbound."
@@ -47,7 +47,7 @@ Set up a MSMQ Cluster Group. Cluster group is a group of resources that have a u

For more information, see https://technet.microsoft.com/en-us/library/cc753575.aspx

For NServiceBus endpoint destination, we address the queues by the MSMQ cluster group's name, where we will later add all the rest of our clustered resources. In non-cluster terms, we typically add the machine name to address the queue, i.e. `queue@MachineName`. In cluster terms we address it by queue@MSMQ Network name.
For NServiceBus endpoint destinations, queues are addressed by the MSMQ cluster group's name, to which the rest of the clustered resources will later be added. In non-cluster terms, typically the machine name is used to address the queue, i.e. `queue@MachineName`. In cluster terms it is addressed by `queue@MSMQ Network name`.

WARNING: These queue(s) must be manually created, as they are not created by the NServiceBus installation process.
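
A hedged sketch of creating such a transactional queue manually with `System.Messaging` (the queue path is illustrative; on a cluster the queue belongs to the MSMQ cluster group, not an individual node):

```csharp
using System.Messaging;

public static class QueueSetup
{
    // Create the transactional queue if it does not exist yet.
    public static void EnsureQueue(string path)
    {
        if (!MessageQueue.Exists(path))
        {
            MessageQueue.Create(path, transactional: true);
        }
    }
}

// Usage: QueueSetup.EnsureQueue(@".\private$\myqueue");
```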

@@ -121,10 +121,10 @@ Do not try starting the services as they will run in the scope of the local serv
Now, add each distributor to the cluster:

1. Right-click the MSMQ cluster group, and select Add a Resource - \#4 Generic Service.
1. Select the distributor service from the list. The services are listed in random order but typing "Distributor" will get you to the right spot if you named your services as directed above.
1. Select the distributor service from the list. The services are listed in random order. Typing "Distributor" will help locate the desired service.
1. Finish the wizard. The service should be added to the cluster group, but not activated. Don't activate it yet!
1. Right-click the distributor resource and select Properties.
1. Now this is where it gets weird. You will eventually check "Use Network Name for computer name" and add a dependency, but do not do both at the same time! If you do it will complain that it can't figure out what the network name is supposed to be because it can't find it in the dependency chain, which you told it, but it hasn't been saved yet. To get around it, switch to the Dependencies tab and add a dependency for the MSMQ instance. From there, it finds everything else by looking up the tree. Click Apply to save the dependency.
1. Now this is where it gets weird. Eventually "Use Network Name for computer name" must be checked and a dependency added, but do not do both at the same time! If done at the same time, the wizard will complain that it can't figure out what the network name is supposed to be, because the dependency chain that would tell it hasn't been saved yet. To get around this, switch to the Dependencies tab and add a dependency for the MSMQ instance. From there, it finds everything else by looking up the tree. Click Apply to save the dependency.
1. Switch back to the General tab and check the "Use Network Name for computer name" checkbox. This tells the application that `Environment.MachineName` should return the cluster name, not the cluster node's computer name. Click Apply.
1. Repeat for the other distributors.

@@ -141,9 +141,9 @@ Set up the worker processes on all worker servers (not the cluster nodes!) as wi

Configure the workers' `MasterNodeConfig` section to point to the machine running the distributor as described on the Distributor Page under [Routing with the Distributor](distributor).

With the distributors running in the cluster and the worker processes coming online, you should see the Storage queues for each process start to fill up. The more worker threads you have configured, the more messages you can expect to see in each Storage queue.
With the distributors running in the cluster and the worker processes coming online, note that the Storage queues for each process start to fill up. The more worker threads configured, the more messages will appear in each Storage queue.

While in development, the endpoint configurations probably don't have any @ symbols in them, in production you have to change all of them to point to the Data Bus queue on the cluster, i.e., for application MyApp and logical queue MyQueue, the worker config looks like this:
While in development the endpoint configurations probably don't have any `@` symbols in them, in production they must all be changed to point to the Data Bus queue on the cluster, i.e., for application MyApp and logical queue MyQueue, the worker config looks like this:


```XML
</configuration>
```
## Conclusion
This article shows how to set up a Windows Failover Cluster and one or more worker node servers to run a scalable, maintainable, and reliable NServiceBus application infrastructure.
