Resolves #3 Octopus Deploy SSRS Post
Sean Bedford committed Feb 13, 2015
1 parent 79e11e4 commit 8fb91a1
Showing 5 changed files with 73 additions and 112 deletions.
5 changes: 4 additions & 1 deletion _config.yml
@@ -30,7 +30,7 @@ baseurl: ""
# !! You don't need to change any of the configuration flags below !!
#
markdown: redcarpet
highlighter: rougue
highlighter: pygments
permalink: /:title/

# The release of Jekyll Now that you're using
@@ -53,3 +53,6 @@ exclude:
- README.md
- sbedford.github.io.sublime-project
- sbedford.github.io.sublime-workspace

redcarpet:
  extensions: ["no_intra_emphasis", "fenced_code_blocks", "autolink", "tables", "strikethrough", "superscript", "with_toc_data"]
79 changes: 6 additions & 73 deletions _posts/2015-02-08-git-jira-releasenotes.md
@@ -23,9 +23,7 @@ At least that was my first thought more than 3 years ago...strange decision I kn

The main reason was that I was, and still am, using the Git bash shell and wanted an easy way to just

```bash
genReleaseNotes.sh v1.0.0.0 v1.0.1.0
```
{% gist 860429cb21bdb1db586e execute.sh %}

After much mucking around with redirecting STDOUT to files for sorting, searching and filtering (sed, grep, awk) I got it working and moved on.

@@ -54,51 +52,15 @@ The only problem that it didnt work, at least [not straight away](/open-source-d

Using LibGit2Sharp to extract the commits between the tags was pretty simple - find the tags, execute a search and then consume the data. The only gotcha was the ordering of the search parameters as you're walking backwards from the current HEAD of the repository:

```c#
var previousVersionTag = repo.Tags[options.PreviousVersion];
var currentVersionTag = repo.Tags[options.CurrentVersion];

var filter = new CommitFilter()
{
    Since = currentVersionTag,
    Until = previousVersionTag
};

var commits = repo.Commits.QueryBy(filter).ToList();
```
{% gist 860429cb21bdb1db586e commits.cs %}

Once I had the commits I was interested in, extracting the Jira tickets was a snap (take that, Awk!) and in no time I was using the JiraRestClient to retrieve the data I needed from the [Jira REST API](https://docs.atlassian.com/jira/REST/latest/).

```c#
var referencedJiras = from commit in commits
                      where regex.IsMatch(commit.Message)
                      select regex.Match(commit.Message);

var jiraClient = new TechTalk.JiraRestClient.JiraClient<IssueFieldsWithcomponent>(options.JiraServer, options.JiraUserName, options.JiraPassword);

foreach (var jira in referencedJiras)
{
    var issue = jiraClient.LoadIssue(jira.Value);
}
```
{% gist 860429cb21bdb1db586e Jiras.cs %}

JiraRestClient comes with a predefined set of fields which are retrieved when you interrogate the API, however the list of Components (which we use to classify different areas of our systems) is not one of them. Luckily, the API allows you to extend the [IssueFields](https://github.com/techtalk/JiraRestClient/blob/master/TechTalk.JiraRestClient/IssueFields.cs) class to de-serialise any of the additional information which is returned by the REST API:

```c#
public class IssueFieldsWithcomponent : IssueFields
{
    public IEnumerable<Component> components { get; set; }

    public string customfield_10400 { get; set; }

    public Issue<IssueFieldsWithcomponent> RelatedEpic = null;

    public IssueFieldsWithcomponent()
        : base()
    {
        this.components = new List<Component>();
    }
}
```
{% gist 860429cb21bdb1db586e IssueFieldsWithcomponent.cs %}

The final piece of the puzzle was to export the data into a Word document so it could be issued - back to the old faithful, the OpenXml SDK. Again, if you haven't played with this before go and have a look; it's a really neat framework for manipulating or creating documents or document fragments and it isn't limited to just Word documents.

@@ -112,38 +74,9 @@ The tricky bit when dealing with OpenXml for me has always been tables - there i

My solution was to bind the columns of the table to the property names of the object through the text contained in the heading column:

![Word Document Template Binding](/images/2015-01-08-git-jira-releasenotes-jiraticket.png "Word Document Template Binding")
![Word Document Template Binding](/images/2015-02-08-git-jira-releasenotes-jiraticket.png "Word Document Template Binding")

```c#
public class JiraIssueModel
{
    public string Reference { get; set; }

    public string Title { get; set; }

    public string Description { get; set; }

    public string Components { get; set; }

    public string Epic { get; set; }

    public JiraIssueModel(Issue<IssueFieldsWithcomponent> jira)
    {
        this.Reference = jira.key;
        this.Title = jira.fields.summary;
        this.Description = jira.fields.description;

        StringBuilder c = new StringBuilder();
        foreach (var comp in jira.fields.components)
            c.AppendFormat("{0},", comp.name);
        this.Components = c.ToString();
        if (this.Components.Length > 0)
            this.Components = this.Components.Substring(0, this.Components.Length - 1);

        if (jira.fields.RelatedEpic != null)
            this.Epic = string.Format("{0} - {1}", jira.fields.RelatedEpic.key, jira.fields.RelatedEpic.fields.summary);
    }
}
```
{% gist 860429cb21bdb1db586e JiraIssueModel.cs %}

This solution works reasonably nicely as it allows the columns to be re-ordered or additional columns added without having to change the script (unless we need to pull down more data from Jira).

101 changes: 63 additions & 38 deletions _posts/2015-02-12-SSRS-Deployment-Automation.md
@@ -4,59 +4,84 @@ title: SSRS Deployment Automation with Octopus Deploy
byline: How we use Octopus Deploy to automate the release of our SQL Server Reporting Services solution
---

We use SQL Server Reporting Services to provide real-time reporting to our clients' business units spread across the country. Each of these business units has its own data warehouse which the reports need to query - i.e. the Brisbane BU has its own database and the Sydney BU its own database.
We use SQL Server Reporting Services to provide real-time reporting to our clients' business units across Australia.

Anyone who has ever done any SSRS development knows that deploying these reports manually is a real pain in the backside. Nothing is more painful than trying to upload a large number of files through the clunky SSRS web interface, though manually executing database scripts across multiple databases is pretty ordinary too.
Anyone who has ever done any SSRS development knows that deploying these reports manually is a real pain in the backside - nothing is more painful than trying to upload a large number of files through the clunky SSRS web interface.

The first solution which people usually move to is automating the deployment through Powershell scripts.
It's even worse when you throw in having to manually execute database scripts across multiple databases!

This was the first step we took, however having to manually run the scripts along with ensuring that the databases were updated with the latest versions of the sprocs and functions was too tedious over time.
The first solution people usually move to is automating the deployment through PowerShell scripts. This was also the first step we took; however, having to manually run the scripts, along with ensuring that the databases were updated with the latest versions of the stored procedures and functions, was too tedious.

Complete deployment automation was the goal - we wanted a totally hands free process which we could reliably and repeatably run in all environments and [Octopus Deploy](http://www.octopusdeploy.com) allowed us to achieve this.
Complete deployment automation was the goal - we wanted a totally hands-free process which we could reliably and repeatably run in all environments. As we were already using the awesome [Octopus Deploy](http://www.octopusdeploy.com) to deploy the other applications in our solution, it was the logical choice to drive this.

### Packaging

Octopus Deploy requires a nuget package which contains the artefacts to be deployed. The contents of this nuget package are defined through a nuspec file in the repository.

```xml
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>Reports</id>
    <title>Report Package</title>
    <version>1.0.0</version>
    <authors>Ajilon</authors>
    <owners>Ajilon</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Contains all SQL Server Reporting Services reports and scripts required to create a fully functional environment</description>
  </metadata>
  <files>
    <file src="*.rdl" target="Reports" />
    <file src="..\sql\*.sql" target="Sql" />
    <file src="..\sql\Scripts\*.sql" target="Sql\Scripts" />
    <file src="..\sql\Scripts\Dependencies\*.sql" target="Sql\Scripts\Dependencies" />
    <file src="*.ps1" target="" />
    <file src="..\*.ps1" target="" />
  </files>
</package>
```

Using our existing TeamCity installation, I created a new build configuration to retrieve the SSRS project from GitHub and build the nuget package according to the nuspec using nuget.exe (phew)
Octopus Deploy requires a nuget package containing the artefacts to be deployed; these are defined by a nuspec file in the repository.

{% gist 0693ab338a86aed34d98 Reports.nuspec %}

The deployment process relies on a specific structure to allow, for example, the SQL functions to be deployed before the SQL Stored Procedures which may depend on them:

* Reports - contains all SSRS artefacts to upload
* SQL\Scripts - contains the stored procedures to be deployed to each business unit database
* SQL\Dependencies - contains the functions or other SQL artefacts which must be deployed before the contents of the SQL\Scripts directory.

Using our existing TeamCity installation, I created a new build configuration to retrieve the SSRS project from GitHub, build the package according to the nuspec and trigger a deployment to our _Development Testing_ environment.

![TeamCity Build Configuration](/images/2015-02-12-teamcity-config.png "TeamCity Build Configuration")
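The TeamCity step itself boils down to a single nuget.exe invocation - a sketch only, as the paths and output directory here are assumptions (TeamCity exposes the build number through the BUILD_NUMBER environment variable):

```powershell
# Pack the nuspec into the nuget package Octopus Deploy will consume;
# the package version is stamped from the TeamCity build number.
& nuget.exe pack .\reports\Reports.nuspec -Version $env:BUILD_NUMBER -OutputDirectory .\artifacts
```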

### Deployment

The deployment process leverages two powershell scripts invoked from the default.ps1 Octopus script. The process is illustrated below and requires a number of configuration variables which are supplied by Octopus depending on the environment which is being deployed to - neat!
The deployment process leverages two PowerShell scripts invoked from the default.ps1 Octopus script.

The first script, [DeployReports.ps1](https://gist.github.com/sbedford/0693ab338a86aed34d98#file-deployreports-ps1), is responsible for deploying the reports and data sources into SSRS. The second script, [DeploySQL.ps1](https://gist.github.com/sbedford/0693ab338a86aed34d98#file-deploysql-ps1), is responsible for pushing database scripts out to ensure that the database version matches the report version.

The Octopus deployment process uses a number of variables to drive the deployment:

* SSRSReportServerUrl - contains the full url to the SSRS web service e.g. http://server/reportserver/reportservice2005.asmx
* SSRSReportFolder - the root folder to deploy the reports into (we use the project name)
* SSRSSharedDataSourcePath - the path to a Shared DataSource required by some of our reports. It is expected this is already set up and configured (this is the only manual step)
* SSRSDynamicDataSourceCredentialsUsername - the username to connect to the business unit database with
* SSRSDynamicDataSourceCredentialsPassword - the password for the business unit database user.
* nEDRMappingConnectionStrings - a comma-separated string of Oracle EZConnect strings used to connect to SQL*Plus with.
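To give a feel for how these hang together, here's a minimal sketch of what the default.ps1 entry point might look like - the script parameters are assumptions; only the variable names above come from our actual configuration:

```powershell
# default.ps1 - invoked by the Octopus Tentacle on deployment.
# $OctopusParameters is supplied by Octopus Deploy.
$reportServerUrl = $OctopusParameters["SSRSReportServerUrl"]
$reportFolder    = $OctopusParameters["SSRSReportFolder"]
$connectionCsv   = $OctopusParameters["nEDRMappingConnectionStrings"]

# Push the reports and their data source configuration into SSRS...
& .\DeployReports.ps1 -ReportServerUrl $reportServerUrl -ReportFolder $reportFolder

# ...then bring each business unit database up to the matching version.
& .\DeploySQL.ps1 -ConnectionStrings $connectionCsv
```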

#### Deploying Reports

Report deployment is fairly straightforward - use the web service endpoint exposed by SSRS to set up the folder structures, upload all reports and finally configure each report's data sources.

As we share a single SSRS server across a few environments (DEV and UAT), our deployment process ensures that when we are deploying to the _Development Testing_ environment, the reports in the _UAT_ environment are not affected.

We do this by using the name of the _Octopus Deploy Environment_ we are deploying into and creating the appropriate directory structure off that:

![Octopus Deployment Process](/images/2015-02-12-octopus-process.png "Octopus Deployment Process")
{% gist 0693ab338a86aed34d98 Report_Folder.ps1 %}
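The gist has the full detail, but the core of it - assuming the ReportService2005 endpoint and the Octopus system variable for the environment name - is just a couple of CreateFolder calls against the SSRS web service:

```powershell
# Connect to the SSRS SOAP endpoint supplied by Octopus.
$ssrs = New-WebServiceProxy -Uri $SSRSReportServerUrl -UseDefaultCredential

# Nest everything under the Octopus environment name so DEV and UAT
# deployments to the shared server don't clobber each other.
$environmentName = $OctopusParameters["Octopus.Environment.Name"]

# CreateFolder throws if the folder already exists, so swallow that case.
try { $ssrs.CreateFolder($environmentName, "/", $null) } catch { }
try { $ssrs.CreateFolder($SSRSReportFolder, "/$environmentName", $null) } catch { }
```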

The process relies on a naming convention for identifying the data sources as different actions are required for the business unit databases as opposed to a centralised database in use by our core application.
Once the reporting folders have been configured it's time to upload each of our report definition files.

Updating the datasources is required for two reasons:

1. Credentials in an embeeded data source are not saved when uploading an RDL - also, we're not using the same credentials in development and production environments are we?
{% gist 0693ab338a86aed34d98 Report_Upload.ps1 %}
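Again as a rough sketch (the folder and proxy variables carry over from the sketch above), the upload is a loop over the packaged RDL files calling CreateReport with the overwrite flag set:

```powershell
$targetFolder = "/$environmentName/$SSRSReportFolder"

# Upload every report definition shipped in the package's Reports directory.
Get-ChildItem -Path ".\Reports" -Filter *.rdl | ForEach-Object {
    $definition = [System.IO.File]::ReadAllBytes($_.FullName)
    $reportName = [System.IO.Path]::GetFileNameWithoutExtension($_.Name)

    # Overwrite = $true so redeployments replace any existing copy.
    $warnings = $ssrs.CreateReport($reportName, $targetFolder, $true, $definition, $null)
    $warnings | ForEach-Object { Write-Warning $_.Message }
}
```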

While you can run the reports now, SSRS will crash and complain about the datasources because:

1. Credentials in an embedded data source are not saved when uploading an RDL - also, we're not using the same credentials in development and production environments are we?
1. The link between a Report and a Shared Datasource must be re-established before the report can be run.

* The _dsDynamic_ data source is an expression-based data source which uses a parameter passed into the report to look up the connection from Oracle's tnsnames.ora
* The _dsTas_ data source is a shared data source which is expected to exist in a configurable directory on the reporting server. This is the only manual step required in the installation and is required as the link between the Report and the Shared DataSource must be re-established before the report can run.
To fix this we need to retrieve the report from the server and specifically modify the data sources using variables provided by Octopus Deploy:

{% gist 0693ab338a86aed34d98 Report_Datasource.ps1 %}
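For the shared data source case, this boils down to re-pointing each report's data source reference at the pre-configured shared item. A sketch of the idea - the report path is hypothetical, $ssrs and $targetFolder carry over from the sketches above, and the dynamic (expression-based) credential handling is omitted:

```powershell
$reportPath = "$targetFolder/SomeReport"   # hypothetical report name

# Types generated by New-WebServiceProxy live in an auto-generated namespace.
$ns = $ssrs.GetType().Namespace

foreach ($ds in $ssrs.GetItemDataSources($reportPath)) {
    $reference = New-Object ("$ns.DataSourceReference")
    $reference.Reference = $SSRSSharedDataSourcePath

    $dataSource = New-Object ("$ns.DataSource")
    $dataSource.Name = $ds.Name
    $dataSource.Item = $reference

    # Re-establish the link between the report and the shared data source.
    $ssrs.SetItemDataSources($reportPath, @($dataSource))
}
```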

#### Deploying SQL Procedures and Functions

The script for deploying our SQL artefacts is responsible for 3 things:

1. Deploy everything in the \SQL\Dependencies folder
1. Deploy everything in the \SQL folder
1. Insert a new record into a VersionInfo table (thanks [FluentMigrator](https://github.com/schambers/fluentmigrator)!) to record the current version of the reports.

As mentioned before, our deployment process deploys to multiple databases in series; this is done by adding a CSV string of SQL*Plus connection strings as an environment-specific variable in Octopus Deploy.
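The shape of that loop is roughly the following - a sketch only, assuming sqlplus.exe is on the PATH and the folder layout from the nuspec above; the real script is in the gist below:

```powershell
# Each entry is an Oracle EZConnect string (user/password@host/service).
$connections = $OctopusParameters["nEDRMappingConnectionStrings"] -split ","

foreach ($connection in $connections) {
    $ezConnect = $connection.Trim()

    # Dependencies (functions etc.) must be deployed before the scripts that use them.
    $scripts = @(Get-ChildItem ".\Sql\Scripts\Dependencies" -Filter *.sql) +
               @(Get-ChildItem ".\Sql\Scripts" -Filter *.sql)

    foreach ($script in $scripts) {
        & sqlplus -S $ezConnect "@$($script.FullName)"
        if ($LASTEXITCODE -ne 0) { throw "sqlplus failed for $($script.Name)" }
    }
}
```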

{% gist 0693ab338a86aed34d98 DeploySQL.ps1 %}

(Admittedly, this script is a little clunky and while it works for us, I haven't tested it in any other configurations than how we are using it.)

### Conclusion

I really should rewrite those PowerShell scripts in ScriptCS hey!
Binary file removed images/2015-02-12-octopus-process.png
