Introduction to Corporate Computing

by Dmitry Kirsanov 4. November 2011 01:31

Imagine that you have a company with 5 000 employees and 6 000 computers. These could be desktop computers, notebooks, various mobile devices – anything running Windows. And you have 10 people to manage all this hardware and software.

When working under such tight constraints, you can’t avoid standardizing everything you can. And with such a tiny staff – a ratio of 600 machines per IT staff member – you want to automate everything and make the environment more reliable and less dependent on system administrators.

Imagine a situation when you need to install software on all machines, or perhaps on half of them, which is 3 000 computers anyway. With the help of Active Directory you can do it automatically, if the application ships as an MSI (Microsoft Installer) file. If it doesn’t, you can run the legacy EXE installation using Microsoft System Center Configuration Manager (SCCM), which is at version 2007 at the time this post is written, though version 2012 is already in the RC (release candidate) phase.

However, in both cases you might need to change the installation’s configuration significantly: remove automatic updates, desktop icons, Startup shortcuts and the various screens that welcome new users. You may also want to make it impossible to modify the installation while keeping the Repair and Uninstall options, or perform initial configuration, such as making your program use the DivX codec or whatever else can be set once the program is installed.
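Such changes don’t require rebuilding the vendor’s package: with MSI they are typically captured in a transform (.mst) file applied at install time. A hypothetical silent deployment, with made-up file names, would look like this:

    msiexec /i AcmeApp.msi TRANSFORMS=corporate.mst /qn /norestart

The /qn switch suppresses the installer UI entirely – exactly what you want when a package is pushed to thousands of machines unattended.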

But how?

That’s what repackaging is for. You can take anything you want and package it into an MSI installation file – that’s packaging. When you take an existing MSI package, or a legacy EXE/BAT/whatever package, and transform it into a new MSI package, that’s called repackaging.

And it’s quite a profitable business.

The reason it’s profitable is mainly that you need to be an expert in systems administration, and preferably also in hardware and software development, to successfully repackage the whole software portfolio your 6K-computer company needs.

And most likely you don’t have such a specialist, or have better tasks for him, right? So you outsource the job to a repackaging company and agree to pay per conversion or per day of work, depending on the volume required.

Repackaging these days includes not only the conversion, but also testing and analysis of your software. For example, you may submit software you have been using for years to a repackager, and he will test whether his package, and the software itself, will work on the required target operating system, such as Windows 7 x64.

If not, he will recommend a course of action to make it work. There is a correlation between his skill level and the ratio of “that’s impossible” answers, as more experienced and skilled repackagers tend to solve the problem instead of giving up early.

So, let’s return to our company. Once you’ve got your legacy software repackaged into a stable and shiny MSI package, you install it wherever you need using the SCCM server. SCCM will make sure that older packages are updated with this one, but it won’t track your licenses, if any. So as you can see, there is a whole lot of new concepts for a standard systems administrator to uncover.

If you are installing things like Microsoft Office or Adobe Acrobat Pro in your company, chances are you need to make sure you don’t install more copies than you paid for. You also want to track how many licenses you have left and who needs them, and perhaps allow those who do to install the necessary software without you doing much about it. Remember, at a ratio of 600 machines per IT administrator, an eight-hour day gives you only 48 seconds per workstation.

So there are tools that track licenses, let people acquire a license from the pool and automatically install the required software once a supervisor approves it – or vice versa, remove software from one machine and redistribute it to others.

There are even scenarios where a user visits a page on the local network, requests a new computer pre-loaded with the required software, and once the request is authorized by his supervisor, receives it. What’s important is that the computer arrives at your company with a blank hard drive, and all you need to know as the system administrator is the MAC address of the new computer and its recipient.

You enter the address into your system, power on the machine and forget about it for the next half hour. In 30 minutes it is ready to work, fully loaded with all the required software. Then you switch off the old workstation, switch on the new one, the user logs in and continues working right where he left off.

I deliberately don’t name the software packages that make this happen, to make it easier to see that all of them work on top of the main layer – Microsoft System Center Configuration Manager, which I am going to talk about soon.

Anyway, as you can see, the single need to operate as many computers as possible with as small an IT staff as possible has led to a whole new sector of the IT market and to highly sophisticated products, which you should learn if you intend to pursue a career as a Windows Server systems administrator in a large organization.

What could be worse than that?

by Dmitry Kirsanov 3. November 2011 05:59

Have you ever thought about all the things that could happen if you became the subject of business espionage through the hacking of your server – either the whole farm, or the one and only server your organization has? What scenarios have you gone through in your fantasies or security planning?

Here is one idea you probably haven’t considered. Imagine that your server hosts installation files for software used on other computers, inside your organization or outside it. Even funnier – part of your network is separated from the Internet, yet it still uses a piece of software whose installation files are stored on the compromised machine.

Using the technique called repackaging, an intruder could modify these installation files so that you couldn’t distinguish them from the originals. They would look and behave identically, but would also install a Trojan horse. In a targeted attack this Trojan wouldn’t be recognized by antivirus software, since it wouldn’t be found on any other machine on the Internet.

While very sophisticated, this attack is also very simple to implement, and it could potentially supply the attacker with precious information for years.

I am not aware of any attack of this kind ever being carried out, so I could probably patent it. Too bad hacking techniques can’t be patented. Anyway, we are going to talk about “white hat” repackaging pretty soon, so stay tuned!

Team Foundation Server 2010 Test Manager Lab 5 - Web Load Testing

by Dmitry Kirsanov 1. November 2011 19:39

The next part in the series of Team Foundation Server 2010 labs. The previous one is available here (Lab 4, Test Runs).

This time it’s about web load testing. When you are developing an ASP.NET web application, it’s paramount to make sure it can handle the required number of requests, or at least to know its exact capacity for scalability planning. If your IT infrastructure operates according to ITIL / ITSM, you need to know what it takes to scale out your application and how to do it right.

We are going to find the bottleneck in our application and refactor it where needed, so that the sudden success of our website will not mean imminent failure (see the Slashdot effect).

This lab is the longest so far (1h 17m) and contains material that is valuable even if you don’t have TFS installed. You can perform web load tests using Visual Studio 2010 Ultimate and a virtual machine with IIS installed, but that’s a good topic for another article.
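If you’d like a taste of the subject before watching the lab: web tests can also be written in code. Here is a minimal sketch of a coded web performance test, assuming a test project that references the Visual Studio web testing framework; the URL is a placeholder. A load test can then replay such a test for hundreds of simulated users:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // A minimal coded web performance test: a single GET of the home page.
    // A load test references this test and replays it for many virtual users.
    public class HomePageWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Placeholder URL - point it at your application under test.
            WebTestRequest home = new WebTestRequest("http://localhost/MyApp/Default.aspx");
            home.ExpectedHttpStatusCode = 200;
            yield return home;
        }
    }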

And now – enjoy and don’t forget to watch it in full screen HD!

Team Foundation Server 2010 Test Manager Lab 5 - Web Load Testing

Team Foundation Server 2010 Test Manager Lab 4 - Test Runs

by Dmitry Kirsanov 1. November 2011 05:01

So far we’ve seen a lot of unusual and amazing things in Team Foundation Server 2010 – more specifically, in Visual Studio 2010 Test Professional. However, one of the most ground-breaking features of TFS and Visual Studio Test Professional is its ability to run automated tests.

By automation we mean performing complex tasks and verifying the results of a user’s interaction with your application, as you can see in this example.
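To give you an idea of what such automation looks like in code, here is a minimal sketch of a Coded UI test – the kind Test Manager can record during a manual run and replay later. The application path is just an example:

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A minimal Coded UI test: launch an application and send it keystrokes.
    [CodedUITest]
    public class SmokeTest
    {
        [TestMethod]
        public void TypingReachesTheApplication()
        {
            // Example path - point it at your application under test.
            ApplicationUnderTest app =
                ApplicationUnderTest.Launch(@"C:\Windows\System32\notepad.exe");

            // Simulates real user input against the active window.
            Keyboard.SendKeys("Automated hello from a Coded UI test");
        }
    }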

Enjoy the 4th lab of Team Foundation Server 2010’s Test Manager – this time it’s about test runs.

Team Foundation Server 2010 Test Manager Lab 4 - Test Runs

As always, make sure you watch it in Full Screen HD!

The previous lab is available here.

Team Foundation Server 2010 Test Manager–Test Cases and Shared Steps

by Dmitry Kirsanov 31. October 2011 23:33

This screencast of an interactive virtual lab describes the Test Cases and Shared Steps features in Team Foundation Server 2010’s Test Manager.

Since there is still no MOC for TFS 2010 (although it has been assigned the number 50430), the only resources you have for now are lectures by TFS gurus and virtual labs. Fortunately, paying close attention to a recorded lab session is usually enough to understand the subject.

So, as always, enjoy the view, and don’t forget to switch to full screen HD in order to see anything.

Team Foundation Server 2010 Test Manager Test Cases and Shared Steps

Team Foundation Server Quality Assurance Lab 2 - Test Plans

by Dmitry Kirsanov 31. October 2011 22:34

When I learn new material, sometimes it’s enough for me to see the system in action to understand the principles behind its logic – especially when it is a self-descriptive lab like this one. Team Foundation Server 2010 is a very complex but extremely valuable engine for energizing your software development division, and one of its key features is automated testing.

With TFS automated testing you can automatically deploy virtual machines with the required configuration, deploy the latest build of your software and test it against various scenarios. When a bug is found, TFS automatically creates a bug record in its centralized system and attaches a screencast (video) of the incident, so a developer can start working on that bug immediately.

Once you start working with TFS in your .NET software development, you can’t imagine life without it.

This lab is about creating and working with test plans, and while there are no astrophysical concepts in it, the topic is usually hard to grasp at first. The reason is quite simple – creating a schema for future work is harder to learn than performing the “real” action, because the necessity of doing it doesn’t look as obvious as, say, compiling your application.

Well, enough talking, enjoy the view! (And don’t forget to switch to HD!)

To see part 3 of this lab, covering Test Cases and Shared Steps, click here.
The previous lab (Test Manager Overview) is available here.

CAPTCHA sample project

by Dmitry Kirsanov 28. October 2011 23:41

I’ve created a small sample project for Mondor’s CAPTCHA (see the full project page here). As some people have been asking on the forum about how to implement it in an ASP.NET project, I decided to make a quick sample for both the development environment and production IIS.

MS CAPTCHA is a free component for ASP.NET which implements a “Completely Automated Public Turing test to tell Computers and Humans Apart” – an image which “only people” can read. Mondor’s CAPTCHA also contains a unique feature called “Arithmetic”, which displays a simple formula: even if a bot manages to read “2 + 2”, it will type “2+2” as the answer, while the correct answer is 4.
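To illustrate the arithmetic trick – this is not Mondor’s actual API, just a sketch of the concept – the server renders the formula as an image but validates against the computed result:

    using System;

    // Sketch of the "Arithmetic" idea: render a formula, validate the result.
    public class ArithmeticChallenge
    {
        private static readonly Random Rng = new Random();
        private readonly int answer;

        // The text that gets rendered into the CAPTCHA image, e.g. "2 + 7".
        public string Question { get; private set; }

        public ArithmeticChallenge()
        {
            int a = Rng.Next(1, 10), b = Rng.Next(1, 10);
            Question = string.Format("{0} + {1}", a, b);
            answer = a + b;
        }

        // A bot that merely OCRs the image echoes "2 + 7"; a human types "9".
        public bool Validate(string userInput)
        {
            int value;
            return int.TryParse(userInput.Trim(), out value) && value == answer;
        }
    }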


ExecuteAs utility

by Dmitry Kirsanov 28. October 2011 01:50

Well, yet another useful command line utility. This one is for system administrators who need to run a command on behalf of another user.

Why bother, if Windows already has a command called runas? Mainly because runas doesn’t accept the password as a command line parameter, and also because I wanted to add more features to such a simple process.

This is the help text you get when running this program without parameters:

Syntax: executeas.exe /u:UserName /p:Password /x:Priority /d:Domain /a:Affinity /quiet /hide /noprofile /wait /t:60 [program name]
Where t is a timeout in seconds to wait for program completion.
The only required parameter is program name. You can place command line arguments after the program name.

As you can understand, UserName is the user name of the target user, without the domain name,
Password is his password,
Priority is the priority at which you want the program to be executed (1 – idle, 2 – below normal, 3 – normal, 4 – above normal, 5 – high),
Domain is the domain part of the user name, if needed,
Affinity is how many CPU cores you’d like the program to use,
Quiet suppresses all output,
Hide hides the target program, so it is not displayed to the user,
NoProfile means the target user’s profile won’t be loaded,
Wait waits until the target program completes – useful when running from a batch file.
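For the curious, the core of such a utility is surprisingly small in .NET 3.5. Here is a rough sketch (argument parsing, affinity and error handling omitted; the credentials are placeholders) of how a process can be started under another account:

    using System.Diagnostics;
    using System.Security;

    // Rough sketch of the core of an "execute as" utility.
    class RunAsUser
    {
        static void Main(string[] args)
        {
            // Placeholder credentials - in the real tool they come from /u:, /d:, /p:.
            SecureString password = new SecureString();
            foreach (char c in "P@ssw0rd") password.AppendChar(c);

            ProcessStartInfo psi = new ProcessStartInfo("notepad.exe")
            {
                UserName = "jdoe",
                Domain = "CORP",
                Password = password,
                UseShellExecute = false, // required when supplying credentials
                LoadUserProfile = false, // the /noprofile switch
                CreateNoWindow = true    // roughly the /hide switch
            };

            Process p = Process.Start(psi);
            p.PriorityClass = ProcessPriorityClass.BelowNormal; // the /x: switch
            p.WaitForExit();             // the /wait switch
        }
    }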

So, here it is and have fun!

As always, the .NET Framework is required for this program to run. This time it’s .NET 3.5, which is installed by default on Windows 7 and is available as a feature on Windows Server 2008.

ExecuteAs.zip (3.99 kb)

Scary!

by Dmitry Kirsanov 20. October 2011 08:49

Imagine that you’ve just made a revolutionary change to the project of your life. You’ve spent hours on it, and you’re pretty sure you couldn’t reproduce it as well if you lost today’s changes.

So you back up your files using a file synchronization utility that syncs them with online storage – Live Mesh, for example.

And the first thing you see when synchronization of your now-so-precious work folder starts is: “Receiving files …”.


What can be done in 12 hours by an average .NET developer?

by Dmitry Kirsanov 20. October 2011 00:49

Have you ever heard about Garage48? It’s an event that was born in Estonia last year and has since been widely adopted by other countries. The scenario is this: companies submit tasks – “ideas” that have to be coded to production quality in 48 hours. Students come to the event and work 48 hours non-stop, for food and beverages, and the contest winner gets an iPhone.

So, you have an idea and an iPhone – now 5 undergrads can make it live; just give them enough hamburgers and Red Bull.

When I heard about it (in the context that one of my colleagues was eager to participate), I wondered what these students would gain from the event. I mean, at 32 I wouldn’t take part in such an event, go without sleep for 48 hours and produce free code for someone. I still haven’t got a convincing answer, but from the point of view of the author of the “idea”, it’s a windfall.

But I got the itch to see what I was capable of doing in 12 hours, starting with absolutely nothing but an “idea”.

So I replaced Red Bull with German and Belgian beer, wrote my idea down in 3 sentences and sat down with my notebook for 12 hours.

I decided to create a URL shortening service that had to be functionally better than anything else on the market – better from my point of view, of course. As I thought it through (that time was also included in the 12 hours), I wanted this service to have:

  1. A WCF backend, so I could create utilities that add new links automatically, with no user interaction, and so other tools could use the service and provide additional services on top of it (see the sketch after this list).
  2. A QR code for each link, generated automatically.
  3. The ability to protect a link with a CAPTCHA,
  4. or with a password,
  5. or by requiring the visitor to declare his age, in case the link leads to an age-restricted web resource.
  6. Expiring links – by date or by redirection count.
  7. Letting users know where they are being redirected to – but not always. So there would be 4 different ways of redirecting: 2 client-side and 2 server-side.
  8. No malicious links on the service: submitted URLs should be scanned automatically, and URLs leading to malicious web resources removed.
  9. Overall good web design, because tinyurl sucks.
  10. Some functions available only to registered members, though anonymous users could use the service as well; trusted people would have no limits at all.
  11. An engine available to 3rd parties, so they could install it and use it with their own domain name.
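As an illustration of point 1, here is a minimal sketch of what a WCF service contract for such a backend might look like. The interface and method names are mine, invented for this example – the actual byte.lv API is not published:

    using System.ServiceModel;

    // Hypothetical contract for the link-shortening backend (point 1 above).
    // Names are illustrative, not the real byte.lv API.
    [ServiceContract]
    public interface ILinkService
    {
        // Registers a long URL and returns its short code.
        [OperationContract]
        string Shorten(string longUrl, string password, int maxRedirects);

        // Resolves a short code back to its target URL, or null if expired.
        [OperationContract]
        string Resolve(string shortCode);
    }

With a contract like this, both the website frontend and any third-party tool talk to the same service, which is what makes point 11 feasible.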

That’s quite a lot for 12 hours with no base to start from, right? That’s what we tell our customers.

Anyway, after 12 hours of work I had a website with a name, web design, logo, backend, database, frontend and additional infrastructure up and running in a test environment. Debugging took another 2 hours, and deployment another 30 minutes.

You can take a look at the service here – http://byte.lv – just bear in mind that it’s a beta and some things may glitch. I reserve the right to a bug, yes. Anyway, if you’d like to have this engine working for your company’s needs – sure, you can have it.

 

P.S. Now imagine having 5 developers working 48 hours – that’s 20 times more than I had!

