Awesome talk this week at the Fort Worth User Group. Below are the links to the content.
http://sdrv.ms/13MJ8LX – PowerPoint
https://github.com/HighwayFramework/Highway.Data – Code Found here
Yesterday at AgileDotNet, before one of Tim’s sessions, we were discussing books which every software developer should read with the room, and particularly with a bunch of SMU students who came down to Houston to attend. I promised I would post the list of those books to my blog, so here they are:
The Must Read List (in Order)
Books You Should Read & Own Eventually
This is the first month that I spent focusing on being a learner in a specific area of my life.
I selected this area so that I could challenge several closely held opinions on the workflow of a productive software developer.
Tim Rayburn – PowerShell, CodeRush, and many more
Amir Rajan – Ruby, SpecFlow, and general workflow optimization
Amir and I started by looking at my current workflow, which includes some major tooling and customization. I use ReSharper with more than 3,000 custom templates, have rebound most keystrokes in Visual Studio, use SpecFlow for acceptance testing, and use NCrunch to run my tests continuously as I program.
Amir has a similar workflow but massively different tooling. Amir uses CodeRush, NSpec, and SpecWatcher for continuous testing. Amir, however, has taken this a lot further: he has Rake commands that will start up IIS Express, deploy the web application, reset the database, and many other things, all built into what he calls “SideKick”. If only we all had a sidekick like that.
I have spent the last four weeks testing that environment and trying that workflow. It has been amazing. I have come to like Ruby and Rake for this, but I think that with a bit of work PowerShell would be better. I have also come to realize that I was not thinking about the possibilities in a broad enough scope. If something we have to do often takes more than a couple of seconds, automate it. We are programmers; if we saw our users doing tons of manual things over and over, we would try to fix that.
This has led me to hold fewer opinions about tooling, and one larger opinion instead: we need to realize that anything is possible, and spend just a bit of time making sure we have the tools we need.
Next Month (by choice of my trusted coach) – Reflective Prayer
As part of a coaching and mentoring program I was introduced to the idea of “Learner vs. Knower”. (http://conversationkindling.blogspot.com/2009/04/are-you-knower-or-learner.html is a good blog post to get you started on those ideas.) I found that on a 5L-to-5K scale, I tended to sit at 4–5K a lot. This is something I would like to temper, in an effort to gain the benefits of being a learner more often. That gets me to this blog post.
April 21st, 2013 – I decided to spend the next year focusing on self improvement.
Month 1 – Software Tools – ( Self Selected )
Month 2 – Month 12 – To be selected by a good friend and trusted advisor Tim Rayburn.
So here goes everything!
AgileDotNet 2013 – Houston "The Ascension" is here! Improving Enterprises in conjunction with Microsoft will bring together the world of .NET development with the world of Agile methods for an exciting experience of discovery, learning, and exchange.
We have all new tracks and fresh content! See conference details here: agiledotnet.com
AgileDotNet – Houston REGISTER NOW! Just $149.00 per person (includes breakfast, lunch, snacks)
Where: Minute Maid Park (http://houston.astros.mlb.com)
When: Friday, May 17th, 2013
Time: 7:30am – 5:30pm
Agile ALM with TFS Workshop REGISTER NOW! Just $25.00 per person (includes lunch)
Where: Improving Houston – 4710 Bellaire Blvd., Suite 305 Bellaire, TX 77401 (improvingenterprises.com)
When: Saturday, May 18th, 2013
Time: 9:00am – 5:00pm
I look forward to seeing you there!
I got a link from a friend to this post (http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer). I read it and can see the points made, but I disagree with it in multiple places. So, in the interest of starting a great conversation with some really smart guys like Ayende and Amir Rajan, here are my responses.
I am going to skip over the overview that Ayende does in the first few paragraphs. The architecture overview is fairly on point and accurate.
1. The whole purpose of a repository is to provide an in memory collection interface to a data source.
I would disagree about the purpose of the repository abstraction. The purpose is both to give an in-memory collection interface to data and to give a testable abstraction over data. I understand that NHibernate and Entity Framework both give an in-memory, collection-based abstraction, but they are both opinionated in how they structure the unit of work and the access to data. If I want to deal with data in a way that abstracts the base implementations away, I cannot take those opinions. I understand that the normal argument is that I should choose a data access architecture up front and stick with it. That, however, excludes the idea that I will be dealing with multiple kinds of data access in a single application: something like a non-relational store for front-side caching and a relational store for transactional recording. In this case, having a standard application architecture for querying and returning data matters for testability, maintenance, and consistency.
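To ground that, here is a minimal sketch of the abstraction I mean. The names are mine for illustration (this is not the Highway.Data API): one repository interface fronts whatever store answers the query, and an in-memory implementation lets the calling code be unit tested in isolation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A query object encapsulates *how* to fetch, so the repository stays
// ignorant of the underlying store's opinions about units of work.
public interface IQuery<T>
{
    IEnumerable<T> Execute(IEnumerable<T> source);
}

// The single seam the application codes against. A relational-store
// implementation and a cache-backed one can both sit behind it.
public interface IRepository
{
    IEnumerable<T> Find<T>(IQuery<T> query);
}

// An in-memory fake for unit tests -- no database required.
public class InMemoryRepository : IRepository
{
    private readonly Dictionary<Type, object> _sets = new Dictionary<Type, object>();

    public void Add<T>(IEnumerable<T> items)
    {
        _sets[typeof(T)] = items.ToList();
    }

    public IEnumerable<T> Find<T>(IQuery<T> query)
    {
        object set;
        var source = _sets.TryGetValue(typeof(T), out set)
            ? (IEnumerable<T>)set
            : Enumerable.Empty<T>();
        return query.Execute(source);
    }
}
```

A production implementation would execute the same query object against Entity Framework or NHibernate; the calling code, and its tests, never change shape.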
2. IQueryable<T> –
On this we agree. I return IEnumerable<T> to keep callers from composing an explosion of joins onto returned queries.
3. Specification not pulling its weight –
I would agree that the example used is anemic. The specification really shines when it gives you the ability to step around issues that would normally force me to depart from the standard querying syntax. Something like the examples here, or the examples here.
The other portion is allowing you to both test and reuse queries. I am not talking across types, but across a single type. With paging, ordering and sorting as specification extensions you extend a single query object.
The specification starts to shine when you do things like pre/post query interception, projection, SQL logging, and many other extensions that are not surfaced in every data access implementation.
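To make the composition point concrete, here is a minimal in-memory sketch (hypothetical names, not the actual Highway.Data specification API): the filter is written and tested once, and ordering and paging layer on top of it.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A specification wraps one tested, reusable query...
public class Specification<T>
{
    private readonly Func<IEnumerable<T>, IEnumerable<T>> _query;

    public Specification(Func<IEnumerable<T>, IEnumerable<T>> query)
    {
        _query = query;
    }

    public IEnumerable<T> SatisfyingItemsFrom(IEnumerable<T> source)
    {
        return _query(source);
    }

    // ...while ordering and paging compose onto it as extensions,
    // so the underlying LINQ is never re-written at each call site.
    public Specification<T> OrderedBy<TKey>(Func<T, TKey> key)
    {
        return new Specification<T>(s => _query(s).OrderBy(key));
    }

    public Specification<T> Paged(int page, int size)
    {
        return new Specification<T>(s => _query(s).Skip(page * size).Take(size));
    }
}
```

A call site then reads like `activeOrders.OrderedBy(o => o.Date).Paged(1, 20)` against a filter that was tested once, on plain lists, with no database in sight.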
4. Worse than that, this sort of design implies that there is a value in sharing queries among different parts of the application. And the problem with that sort of thinking is that this premise is usually false.
I re-use queries in multiple applications and find that it has a lot of power, and that the advantages extend to testability. It takes a well-formed abstraction that includes projections, sorts, filters, paging, and included paths. This allows you to compose extensions onto a tested query without having to re-write the same LINQ each time, and it allows for performance tweaks to badly written queries without having to hunt down varying implementations of similar queries.
5. Reading from a database is a common operation, and should be treated as such.
I agree, but I think I have a different idea of how to treat common operations. I want my common operations to be fast, testable, and easily changed. I cannot look at database operations *that have far-reaching performance implications* as something to be ignored and passed off under an approach that cannot be unit tested in isolation.
6. The next portion talks about how the abstraction falls down.
I would say that all abstractions fall down when you need some specific piece of the underlying architecture to solve a problem. The answer is not to re-architect or re-implement the behavior, but to know where the abstraction needs to stop. In Highway we decided to allow for the scenario Ayende outlines by giving you the choice to hit the underlying implementation if you need to. Check out *the advanced queries*.
7. Avoid needless complexity
I agree here; needless complexity should be avoided. I disagree that we should take that to mean we don’t need any constraints around how we pull data. In a world that is becoming more and more data-centric, we should want more performance, testability, and extensibility, not less.
I was sitting in the Professional Scrum Developer Train-the-Trainer event today when the topic of paying down technical debt and building technical wealth the Dave Ramsey way came up. I loved this analogy and wanted to flesh it out. There are some standard approaches that Ramsey frames very well for building financial wealth; those details are here. Below are both Dave Ramsey’s baby steps and my version of those steps applied to technical debt.
I define technical debt as: “The eventual consequences of continually trading craftsmanship for short term velocity”. It is however more than that for legacy code, it is all the built up issues with a code base.
I define technical wealth as the flexibility of a solution to change as it delivers value to the customer without massive change costs.
Baby Step 1
Build up a Safety Net of Tests
A safety net of some basic acceptance tests focusing on the main business process flows will give you a thin safety net to start improving the code base. These basic tests should be automated. If you have to use manual testing, be sure to take time in step 3 to automate these.
Baby Step 2
Fix problems inside the safety net
List out your currently known issues by area. Keep this list visible, and when a PBI for the current sprint touches an area with one of those issues, incrementally work on that issue. Make sure you have a safety net in this area.
Baby Step 3
Build Code coverage through changes
Once you have understood and implemented the first two baby steps, you need to start increasing code coverage by including unit testing in every change. When we touch the code base, we have to wrap what we intend to change in a test, and then make the change.
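As a minimal sketch of that rhythm (the pricing code and numbers here are invented for illustration): first pin down what the code does today with a characterization test, then make the intended change under that cover.

```csharp
using System;

// Legacy code we intend to change. Note it silently floors to whole cents.
public static class Pricing
{
    public static decimal Discounted(decimal price, int percentOff)
    {
        return Math.Floor(price * (100 - percentOff)) / 100m;
    }
}

public static class PricingTests
{
    // Step 1: lock in today's behavior before touching anything.
    // If this fails later, we changed behavior we did not intend to.
    public static void DiscountFloorsToWholeCents()
    {
        if (Pricing.Discounted(9.99m, 10) != 8.99m)
            throw new Exception("characterization test failed");
    }
}
```

Only once the pin is green do we make the change; the test then tells us immediately if we broke behavior someone may be relying on.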
Baby Step 4
Invest in Automated Behavior Testing
When you reach this step, you have built up some solid applications and made some positive improvements. We want to continue to build technical wealth by adding behavior tests and limiting implementations to the business value requested.
Baby Step 5
Invest in skill sets and the future
By this point, you should have already started Baby Step 4 (investing in automated behavior testing) before you invest in skill sets. Whether you are working on your own skill set or those of other team members, you need to do it now. This can be reading blogs, listening to podcasts, or reading books.
Baby Step 6
Design for the future, implement for today
Now it’s time to begin chunking your time and intelligence toward enhancing the application, expanding both the technical abilities of the system and the value you are able to deliver. This is where you learn practices and patterns for growing those capabilities.
Baby Step 7
Build wealth and give!
It is time to build great business velocity and continue to leave a great code base for the application!
Taking these steps helps us accomplish the goals of our business, increase our morale, and live the life of good software craftsmen.
Let’s wrap up with the original steps (links from Dave Ramsey).
Baby Step 1
$1,000 to start an Emergency Fund
An emergency fund is for those unexpected events in life that you can’t plan for: the loss of a job, an unexpected pregnancy, a faulty car transmission, and the list goes on and on. It’s not a matter of if these events will happen; it’s simply a matter of when they will happen.
Baby Step 2
Pay off all debt using the Debt Snowball
List your debts, excluding the house, in order. The smallest balance should be your number one priority. Don’t worry about interest rates unless two debts have similar payoffs. If that’s the case, then list the higher interest rate debt first.
Baby Step 3
3 to 6 months of expenses in savings
Once you complete the first two baby steps, you will have built serious momentum. But don’t start throwing all your “extra” money into investments quite yet. It’s time to build your full emergency fund.
Baby Step 4
Invest 15% of household income into Roth IRAs and pre-tax retirement
When you reach this step, you’ll have no payments—except the house—and a fully funded emergency fund. Now it’s time to get serious about building wealth.
Baby Step 5
College funding for children
By this point, you should have already started Baby Step 4—investing 15% of your income—before saving for college. Whether you are saving for you or your child to go to college, you need to start now.
Baby Step 6
Pay off home early
Now it’s time to begin chunking all of your extra money toward the mortgage. You are getting closer to realizing the dream of a life with no house payments.
Baby Step 7
Build wealth and give!
It’s time to build wealth and give like never before. Leave an inheritance for future generations, and bless others now with your excess. It's really the only way to live!
I work on a lot of code, and trying to coordinate folks across several states part-time is horribly tough, but it just got a bit easier. The announcement of Git support for Team Foundation Service (Brian Harry’s announcement) was generally well received. I got invited to a “drink in TFS’s honor” party to celebrate. I like that the community is receiving this announcement with open arms, because it marks an easier distributed story for TFS users.
I have been looking at a different view of TFS with Git. I love Git for my open source code and for my private source that is distributed. For up to five contributors you can now have free private Git repositories at http://www.visualstudio.com. Yep, you heard right: free private Git. I have been converting some of my GitHub projects to TFS (even the open source ones) for one big reason: work items! It has made my life so much easier to be able to break something down into tasks, put up the board, and away it goes. This has been a serious boon for the five guys I write a ton of code with, because we don’t have to punt to another tool for this anymore.
I started by adding my Team Project to my TFS Hosted account.
Select that pretty orange button with the plus and “Git”.
This sets up all the ALM goodness of work items and tracking, alongside the clone-anywhere goodness that is Git.
Fill in the project information and select your template. I am a bit of a zealot with the Scrum template for my projects.
Once you have this set up, a project will be stood up for you. Mine was Highway.Data, and while I’ll be pushing versions back to GitHub, all my main development has now moved to TFS Hosted.
I opened up my bash prompt and swapped over to my Git repo for Highway.Data: pulled the latest from GitHub, removed origin, added TFS Hosted as origin, and pushed. That was it. It was that easy, and now I get all the work item goodness of TFS with the distributed support of Git.
Now I get project backlog tracking and grooming easily.
I also get Scrum Boards, bug reports, and release burn-up out of the box! All my nice team metrics for our open source guys.
Oh, and did I mention the hardest thing in open source is knowing how much time you have from people? Not a big problem since we can set our capacity in TFS which lets us pick up items we can accomplish.
Hope it helps.
I got a call last night from a good friend, Jay Smith. He was wondering about some multi-targeting work we had done on Highway.Data. I walked him through it, and he gave me a not-so-subtle hint that it would make a good blog post. So here I am.
Start off with your .*proj file; we want to work on a couple of pieces.
First, we want to add multiple outputs, by framework, to the build, so we need to set up multiple drop folders. The lines below set the target framework and also set the output path to include the framework. This will output something like this:
“c:\SomeLocation\YourSolution\Bin\v4.0\your build results”
“c:\SomeLocation\YourSolution\Bin\v4.5\your build results”
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <SolutionDir Condition="$(SolutionDir) == '' Or $(SolutionDir) == '*Undefined*'">..\..\</SolutionDir>
    <!-- Default the framework when one is not passed in, and drop the output into a per-framework folder -->
    <TargetFrameworkVersion Condition="'$(TargetFrameworkVersion)' == ''">v4.0</TargetFrameworkVersion>
    <OutputPath>$(SolutionDir)Bin\$(TargetFrameworkVersion)\</OutputPath>
  </PropertyGroup>
Now we want to build for both framework 4.0 and 4.5. The lines below, in the AfterBuild target, check for the “other” version and set the target to “their” version. The key is the RunEachTargetSeparately attribute. Just be aware that you are going to compile twice here. Do this only if you want to specifically support both versions; assembly redirection is simpler if you are not using new features that don’t exist in the other version.
<Target Name="AfterBuild">
  <MSBuild Condition="'$(TargetFrameworkVersion)' != 'v4.0'" Projects="$(MSBuildProjectFile)" Properties="TargetFrameworkVersion=v4.0" RunEachTargetSeparately="true" />
  <MSBuild Condition="'$(TargetFrameworkVersion)' != 'v4.5'" Projects="$(MSBuildProjectFile)" Properties="TargetFrameworkVersion=v4.5" RunEachTargetSeparately="true" />
</Target>
We also want to conditionally include files that use the features specific to the new framework so that we don’t get reference errors. We do this on an include basis. The below code will only include a file for compilation if the target framework is 4.5
<Compile Condition="'$(TargetFrameworkVersion)' == 'v4.5'" Include="Interfaces\IAsyncRepository.cs" />
I hope this helps. If you see something I missed or have a better way to accomplish this please let me know.
Looking for great agile training with industry experts? Hunting questions to help your everyday work? Climbing the agile mountain and stuck? Not to worry, AgileDotNet is coming!
AgileDotNet is the bridge between the world of .Net development and Agile methods, built by agilists passionate about delivering superior content in unique settings.
This will be the fourth year of AgileDotNet, and the content will not disappoint! AgileDotNet brings developers, QA, scrum masters, project managers, and business leaders to empowering and unique sessions across four tracks. Attendees return to their workplace with the tools, motivation, and support to be more agile, both as individuals and as part of a team.
#adn13 is different from those past. While maintaining a high bar for great workshops and discussions, we realized there was a common theme among many of the most steadfast agile coaches and leaders trying to bring change within their enterprise: the enterprise is difficult to change. Budgets, risks, unfamiliar territory, and planning are all excuses that point to one thing. The enterprise has trust issues.
At #adn13, we will break the trust barrier down. With a wrecking ball. You will learn from passionate, field-tested agilists how to establish trust within the team, amongst management, and with the organization as a whole, regardless of what role you play.
As if the conference is not cool enough already with agile experts, Scrum experts, and FOOD TRUCKS! Improving decided to up the game another notch. After the conference on Friday, there is a Saturday workshop at the Improving Offices. This is just crazy!
What? You haven’t registered? Quick, jump over to the registration, and do that now!!
Improving Enterprises is pleased to offer an Agile ALM with Microsoft Team Foundation Server Workshop. In this hands-on, mentoring workshop, we will dive into how to manage the Development process, the Quality Assurance process, and the Project Management process. We will have a full TFS environment, interactive labs, and instructors on hand for questions.
We will be breaking the day into three segments. Each segment will include a free-form section where you can bring your problems to the ALM team at Improving and get some much-needed answers.
*Development lifecycle management is all about getting a streamlined routine that doesn’t hinder velocity and contributes to quality code. We will walk through taking code through development, testing, and deployment.
*Agile Project Management can be quite challenging, managing multiple projects, even more so. We will dive into how to set and measure effective KPIs, automate reporting, and manage work items in an effective and logical way.
*Quality Assurance management in an agile environment is all about being able to effectively and quickly report on the status of the product. We will walk through defining test cases, writing test steps, recording automation, and enabling regression testing.
Join us on March 2nd and discover best practices around Agile ALM and TFS and don't forget to bring your real-world problems for our on-site mentors!
See you there!