Deploying Windows 7 by Using Windows AIK

by Dmitry Kirsanov 9. November 2011 11:20

Another aspect of corporate systems administration is the ability to deploy anything and everything at once without even leaving your chair. In the Windows world we have had that ability since Windows 2000, and it evolves with every new version of the operating system.

One of the key tools for installing the operating system itself is the Windows Automated Installation Kit (AIK). Windows uses a so-called “answer file” so that setup doesn’t ask you for things whose answers are already known. And it’s not only the product key, user and computer name, but also partitions, drivers and other things that could otherwise take hours to install and configure.
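To give you a rough feel for what an answer file contains, here is a minimal Python sketch that stamps out a bare-bones unattend.xml fragment for a list of machines. The computer names, organization and output paths are made up, and a real answer file is normally authored with Windows System Image Manager from the AIK, with far more settings (disks, drivers, OOBE and so on):

```python
# Minimal sketch: generate a bare-bones unattend.xml per machine.
# Names and paths are hypothetical; real answer files are authored with
# Windows System Image Manager (WSIM) from the Windows AIK.

TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <ComputerName>{computer_name}</ComputerName>
      <RegisteredOrganization>{organization}</RegisteredOrganization>
    </component>
  </settings>
</unattend>
"""

def write_answer_file(computer_name: str, organization: str, path: str) -> None:
    """Render the template for one machine and save it as an answer file."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(TEMPLATE.format(computer_name=computer_name,
                                organization=organization))

if __name__ == "__main__":
    # Example: generate answer files for three hypothetical workstations.
    for name in ("WS-001", "WS-002", "WS-003"):
        write_answer_file(name, "Contoso Ltd", f"unattend-{name}.xml")
```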

As a potential deployment scenario, imagine that you’ve just received 100 new computers from your hardware vendor. 100 brand-new machines with no operating system installed, as you will use Windows 7 Enterprise – a volume-licensed edition which you can’t buy at a local store. Your task is to install it as soon as possible – it’s Friday evening and you don’t want to waste your weekend on it.

So you prepare an image of one reference machine and deploy it to all the other machines over the local network. A very simple thing to do when you know what you are doing.
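Conceptually, the capture-and-apply step boils down to something like the sketch below, which wraps the AIK’s ImageX tool. The drive letters, share path and image name are placeholders, and in practice this runs from Windows PE boot media rather than a Python script – treat it as an illustration of the flow, not the lab’s actual procedure:

```python
# Sketch only: wrap the AIK's ImageX tool to capture a reference machine
# into a WIM file and apply it to a blank disk. Drive letters, share path
# and image name are placeholders, not taken from the lab itself.
import subprocess

def capture_reference(volume: str, wim_path: str, image_name: str) -> None:
    """Capture the reference volume into a WIM image (normally run from Windows PE)."""
    subprocess.run(["imagex", "/capture", volume, wim_path, image_name], check=True)

def apply_image(wim_path: str, target_volume: str, index: int = 1) -> None:
    """Apply image number `index` from the WIM onto the target volume."""
    subprocess.run(["imagex", "/apply", wim_path, str(index), target_volume], check=True)

if __name__ == "__main__":
    capture_reference("C:", r"\\server\images\win7-ref.wim", "Win7 Reference")
    apply_image(r"\\server\images\win7-ref.wim", "C:")
```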

The following walkthrough lab is from Microsoft Official Curriculum course 6294A: Planning and Managing Windows 7 Desktop Deployments and Environments. It shows you how to create bootable media with an image of your reference workstation and deploy it to other machines. Enjoy!

Deploying Windows 7 by Using Windows AIK

Microsoft System Center Configuration Manager 2007 LAB 1 / 13

by Dmitry Kirsanov 8. November 2011 15:13

One of the most sophisticated and complicated products made by Microsoft, System Center Configuration Manager (SCCM) is an absolutely irreplaceable tool for corporate systems administration. It requires a lot of resources, a lot of knowledge and a lot of care to install and manage, and theory alone is insufficient to become proficient with it.

So here it is – the first lab in a series. It will show you SCCM in action in a controlled lab environment and explain some of the product’s complexities.

A special note for those who want to study SCCM 2012 – for training purposes there is little difference between the two. A course on SCCM 2007 fits perfectly and will explain most of the 2012 functionality.

Additionally, you may look at TechNet Labs for interactive labs.

As additional material, I would recommend the book from the Unleashed series, which is among the best of its kind.

SCCM 2007 Lab 1 / 13

Team Foundation Server 2010 Quality Assurance Lab 6 - Coded UI Tests

by Dmitry Kirsanov 8. November 2011 14:56

It seems my labs are evolving: I am adding a few comments where needed, and I’ve added a soundtrack to help you focus on what’s happening on the screen.

During my years of training I’ve found that ambient avant-garde music greatly helps in keeping focus on the subject, even when added as background to narrated training. I didn’t add Biosphere, though, as their tracks are rather short, although beautiful. Enjoy!

Coded UI Tests–Visual Studio 2010 Test Manager Lab 6

The previous lab can be seen here.

Security through obscurity

by Dmitry Kirsanov 8. November 2011 11:46

A rather short note for pen-testers.

Sometimes you have software which contacts web services – especially interesting when it’s transferring files.

Some software packages, especially custom ones made for a small number of customers, may expose web services intended to be consumed only by that software.

Pay attention to those. Sometimes there are exposed functions which can be exploited in ways the developers never imagined.

For example, during my most recent pen-test I was able to upload, delete and execute files on the server using only the functions of an exposed web service. Needless to say, I wouldn’t have needed any hacking tools or social engineering to penetrate the networks of their customers as well.
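A first reconnaissance step – assuming the service publishes a WSDL – can be as simple as pulling the service description and listing the operations it exposes. A rough Python sketch (the URL is made up, and of course you need written authorization before probing anything):

```python
# Rough sketch: list the operations a SOAP web service exposes via its WSDL.
# The URL is a made-up example; real engagements need written authorization.
import urllib.request
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

def list_operations(wsdl_url: str) -> list[str]:
    """Download the WSDL and return the names of all declared operations."""
    with urllib.request.urlopen(wsdl_url) as response:
        tree = ET.parse(response)
    # Operations are declared inside portType elements.
    return sorted({
        op.get("name")
        for port_type in tree.iter(f"{{{WSDL_NS}}}portType")
        for op in port_type.findall(f"{{{WSDL_NS}}}operation")
        if op.get("name")
    })

if __name__ == "__main__":
    for name in list_operations("https://example.com/FileService.asmx?WSDL"):
        print(name)
```

Operation names alone often tell you whether it is worth digging deeper.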

This topic is largely omitted from CEH and similar courses, but with some basic knowledge of programming you can kill a whole family of rabbits with one shot.

Also, a side note about pen-testing: I’ve noticed that even when you’re using the simplest, “no-brainer” technique, the customer will call you a “hacker” or a “genius” just to avoid calling their developer or system administrator an idiot.

See the forest behind the trees

by Dmitry Kirsanov 8. November 2011 11:29

Today I was walking through the city and suddenly saw a car belonging to one of our local IT companies. The motto on its side said – “we see further”. Yeah, right.

For years it has been the dream of each and every CEO to look one step further than others. To be what they call a “visionary” or even a “strategist”. To keep a hand on the pulse of technology, you know. To seize opportunities before others react.

However, the funny thing is that most of them don’t see the forest behind the trees. They fail not only at prediction, which is more or less OK, as a sales guy is not necessarily an analyst. They fail to see the trends in their own niche, the living processes inside their own organization. So what you can read on LinkedIn and similar resources is mostly the chewing of the same “enlightening” gum.

The biggest and most consumed chewing gum these days is the cloud. Cloud computing, that is. Without understanding what the cloud is, CEOs usually think about the same features of it:

  • No more server room – we can place everything in the cloud, so this will save us money.
  • All of our clients will use our solution, which is hosted in the cloud, so we won’t have to mess with servers, and this will save us money.
  • We will save money on IT staff – fewer nerds on staff is always good.

Et cetera.

Recently I came across a solution plan designed through the rose-tinted glasses of SaaS (Software as a Service). It is a currently successful corporate application which is about to “go cloud”, so all customers will use one web site and won’t need to install the application locally. The (rather hidden) problem is that this application will require administrative privileges in the customer’s Active Directory, which means on all computers of the company. And all customers will use the same instance of that application. And there are nuclear power plant operators among the customers.

I would say – “one ring to rule them all”, but you remember the story, right?

Corporate PR specialists rush into social networks without insight. They don’t understand the consequences; they are just playing poker. They don’t understand, for instance, that what they are doing is less effective than using a computer program to do the same thing. And once they start using that program, they themselves become useless, as creativity (the only genuine thing that computers don’t have, but can imitate) can be bought through outsourcing or simply dismissed.

The same goes for HR and some other specialties – the work becomes more automated, then it becomes a “cloud” application, and then operating that application becomes part of someone else’s responsibilities. And the application will always be more effective than most human specialists.

These days, creativity, speed and precision alone are not enough. You need knowledge, which is always neglected and, it seems, always will be. The ‘enlightened CEO’ was at the core of the dot-com bubble, and the same is true of any technology-related hype: technology is based on knowledge, and decision-makers simply lack it.

Look at the top players in the IT business. The most successful ones are those founded and led by technical people – engineers and scientists – not by pure entrepreneurs. Among software companies, Apple and Google were founded by technologists; Microsoft as well. And where the CEOs were not technologists themselves, as at Google, they didn’t make the technological decisions, such as what the product would look like and how it would work.

However, most other IT companies are led by entrepreneurs, sometimes with negligible experience in IT, who make the key decisions. And fail.

So, the moral of the story: you can’t just use someone else’s knowledge and experience, mainly because you won’t have complete access to it, only to its public portion. You must have your own. And prove to yourself that you have it.

 

The devil, as you know, is in the details. There was a time when you could just copy what others did and, chances are, you would be fine (remember the IBM PC?). These days, with the cloud, SaaS and whatever other buzzwords come to mind, the frontier is much wider, and you have to be a great analyst to understand why someone else’s solution works the way it does with such success – because there are many details hidden from view, hiding somewhere in the cloud, that won’t apply to your case.

Think about what you’re doing; don’t just look at others.

Password policy of our time?

by Dmitry Kirsanov 3. November 2011 22:18

When I began studying computers in the beginning of the 90s, I adopted the password policy of that time, which stated that passwords should be at least 8 characters long and complex, meaning there should be a number, uppercase and lowercase letters, and, ideally, a special character as well.

With Windows NT 4 we got an addition to that rule, rarely followed in practice: the password should be longer than 14 characters, because otherwise (thanks to the weak LM hash, which stores the password as two 7-character halves) it could be cracked in a matter of seconds.

Windows has additional rules in a corporate environment, but all of them essentially concern the length, complexity and maximum age of the password. However, while you can enforce that in a corporate network, most people are far from understanding the underlying idea of a password policy, can’t estimate the cost of a weak password, and will generally adopt a policy only if it seems reasonable enough to them.
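For reference, those classic rules boil down to something like the quick Python sketch below. Real enforcement happens through Group Policy, not code – this is just an illustration of how little the classic policy actually guarantees:

```python
# Quick sketch of the classic complexity rules discussed above.
# Real enforcement happens via Group Policy; this is only an illustration.

def meets_classic_policy(password: str, min_length: int = 8) -> bool:
    """Length plus a digit, an uppercase and a lowercase letter; specials are a bonus."""
    long_enough = len(password) >= min_length
    has_lower = any(c.islower() for c in password)
    has_upper = any(c.isupper() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return long_enough and has_lower and has_upper and has_digit

if __name__ == "__main__":
    print(meets_classic_policy("Passw0rd"))       # True, yet still a terrible password
    print(meets_classic_policy("correct horse"))  # False under the classic rules
```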

So I decided to create such a policy for myself – take a look at what I came up with:

More...

Introduction to Corporate Computing

by Dmitry Kirsanov 3. November 2011 17:31

Imagine that you have a company with 5,000 employees and 6,000 computers. These could be desktop computers, notebooks, various mobile devices – anything running Windows. And you have 10 people to manage all this hardware and software.

Working under such tight constraints, you can’t avoid standardizing everything you can. And with such a tiny staff – a ratio of 600 machines per IT staff member – you want to automate everything and make the environment more reliable and less dependent on system administrators.

Imagine a situation where you need to install software on all machines, or perhaps on half of them, which is still 3,000 computers. With the help of Active Directory you can do it automatically, if the application ships as an MSI (Microsoft Installer) file. If it doesn’t, you can run the legacy EXE installation and deploy it using Microsoft System Center Configuration Manager (SCCM), which is at version 2007 at the time this post is written, although version 2012 is already in the RC (release candidate) phase.

However, in both cases you might need to change the installation’s configuration significantly: remove automatic updates, desktop icons, Startup shortcuts and the various screens that welcome new users. You may also want to make it impossible to modify the installation while keeping the Repair and Uninstall options. Or perform initial configuration, such as pointing your program at DivX codecs or whatever else you would normally do once the program is installed.

But how?

That’s what repackaging is for. You can take anything you want and package it into an MSI installation file – that’s packaging. When you take an existing MSI package, or a legacy EXE/BAT/whatever installer, and transform it into an MSI package, that’s called repackaging.

And it’s quite a profitable business.

The reason it’s a profitable business is mainly that you need to be an expert in systems administration, and preferably also in hardware and software development, to successfully repackage the whole software portfolio your 6,000-computer company needs.

And most likely you don’t have such a specialist, or you have better tasks for him, right? So you outsource the job to a repackaging company and agree to pay per conversion or per day of work, depending on the volume of work required.

Repackaging these days includes not only the conversion, but also testing and analysis of your software. For example, you may submit software you have been using for years to a repackager, and he will test whether his package, and the software itself, will work on the required target operating system, such as Windows 7 x64.

If not, he will recommend a course of action to make it work. There is a correlation between his skill level and the ratio of “that’s impossible” answers, as more experienced and skilled repackagers tend to solve the problem instead of giving up early.

So, let’s return to our company. Once you’ve got your legacy software repackaged into a stable and shiny MSI package, you install it wherever you need using the SCCM server. SCCM will make sure that older packages are updated with this one, but it won’t track your licenses for it, if any. So, as you can see, there is a whole lot of new concepts for a typical systems administrator to discover.
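Under the hood, that deployment boils down to an unattended Windows Installer command. The sketch below, with made-up file names, shows the kind of command a deployment tool ends up running: an MSI plus a customization transform, no UI, no forced reboot, and a verbose log for troubleshooting:

```python
# Sketch with made-up file names: unattended install of an MSI with a
# customization transform (.mst), suppressing UI and reboots and keeping a log.
import subprocess

def silent_install(msi_path: str, transform_path: str, log_path: str) -> int:
    """Run msiexec quietly; returns the Windows Installer exit code (0 = success)."""
    cmd = [
        "msiexec", "/i", msi_path,
        f"TRANSFORMS={transform_path}",
        "REBOOT=ReallySuppress",   # let the deployment tool decide about reboots
        "/qn",                     # no user interface
        "/l*v", log_path,          # verbose log for troubleshooting
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    code = silent_install(r"C:\packages\App.msi",
                          r"C:\packages\App-corp.mst",
                          r"C:\logs\App-install.log")
    print("Installed OK" if code == 0 else f"msiexec exit code {code}")
```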

If you are installing things like Microsoft Office or Adobe Acrobat Pro in your company, chances are you need to make sure you don’t install more copies than you paid for. You also want to track how many you have left and who needs them, and perhaps allow those who need the software to install it without much involvement on your part. Remember, with a ratio of 600 machines per IT administrator, you only have about 48 seconds per day for each workstation (an 8-hour shift is 28,800 seconds).

So there are tools that track licenses, allow people to acquire licenses from a pool and automatically install the required software once it is approved by a supervisor, or, vice versa, remove software from one machine and redistribute it to others.

There are even scenarios where a user visits a page on the local network, requests a new computer pre-loaded with the required software, and, once the request is authorized by his supervisor, receives the new computer. What is important is that the computer arrives at your company with a blank hard drive, and all you need to know as the system administrator is the MAC address of that new computer and who the recipient is.

You enter the address into your system, power on the machine and forget about it for the next half hour. In 30 minutes it is ready to work, fully loaded with all the required software. Then you switch off the old workstation, switch on the new one, the user logs in and can continue working right away.

I deliberately don’t name the software packages that make this happen, to make it easier to understand that all of them work on top of the main layer – Microsoft System Center Configuration Manager, which I am going to talk about soon.

Anyway, as you can see, the single need to operate as many computers as possible with as small an IT staff as possible has led to a whole new sector of the IT market and to highly sophisticated products, which you should learn if you are about to pursue a career in a large organization as a Windows Server systems administrator.

What could be worse than that?

by Dmitry Kirsanov 2. November 2011 21:59

Have you ever thought about all the possible things that could happen if you became the subject of business espionage through the hacking of your server? Whether it’s a whole farm or the one and only server you have in your organization, what scenarios have you gone through in your imagination or your security planning?

Here is one you probably didn’t consider. Imagine that your server hosts installation files for software which is used on other computers, either inside your organization or outside it. Even funnier – you have a part of your network which is isolated from the Internet, but it still uses a piece of software whose installation files are stored on the compromised machine.

Using a technique called repackaging, an intruder could change these installation files so that you couldn’t distinguish them from the originals. They would look and behave identically, but would also install a Trojan horse. In the case of a targeted attack this Trojan horse wouldn’t be recognized by antivirus software, as it wouldn’t be found on any other machine on the Internet.

While very sophisticated, this attack is also very simple to implement and could potentially supply the attacker with precious information for years.

I am not aware of any attack of this kind ever being carried out, so I could probably patent it. Too bad hacking techniques can’t be patented. But anyway, we are going to talk about “white hat” repackaging pretty soon, so stay tuned!

Team Foundation Server 2010 Test Manager Lab 5 - Web Load Testing

by Dmitry Kirsanov 1. November 2011 11:39

The next part in the series of Team Foundation Server 2010 labs. The previous one is available here (Lab 4, Test Runs).

This time it’s about web load testing. When you are developing an ASP.NET web application, it’s paramount to make sure it can handle the required number of requests, or at least to know your application’s exact capacity for scalability planning. If your IT infrastructure is run according to ITIL / ITSM, you need to know what it takes to scale out your application and how to do it right.

We are going to find the bottleneck in our application and refactor it where needed, so that the sudden success of our website will not mean imminent failure (see the Slashdot effect).

This lab is the longest so far (1h 17m) and contains material which is valuable even if you don’t have TFS installed. You can perform web load tests using Visual Studio 2010 Ultimate and a virtual machine with IIS installed, but that’s a good topic for another article.
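If you just want a rough feel for what a load test measures before firing up TFS, a crude sketch like the one below (standard library only; the URL and numbers are placeholders) already gives you throughput and latency figures. Visual Studio’s web and load tests add scenarios, think times, ramp-up patterns and performance-counter collection on top of that:

```python
# Crude load-test sketch: hammer one URL with N concurrent workers and report
# throughput and latency. URL and numbers are placeholders; real web load tests
# (e.g. in Visual Studio 2010 Ultimate) model scenarios, ramp-up and counters.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost/"   # placeholder target
WORKERS = 20
REQUESTS = 200

def timed_request(_: int) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(timed_request, range(REQUESTS)))
    wall = time.perf_counter() - wall_start

    print(f"{REQUESTS} requests in {wall:.1f}s -> {REQUESTS / wall:.1f} req/s")
    print(f"median latency: {statistics.median(latencies) * 1000:.0f} ms")
    print(f"95th percentile: {sorted(latencies)[int(0.95 * len(latencies))] * 1000:.0f} ms")
```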

And now – enjoy and don’t forget to watch it in full screen HD!

Team Foundation Server 2010 Test Manager Lab 5 - Web Load Testing

Team Foundation Server 2010 Test Manager Lab 4 - Test Runs

by Dmitry Kirsanov 31. October 2011 21:01

So far we’ve seen a lot of unusual and amazing things in Team Foundation Server 2010 – more specifically, in Visual Studio 2010 Test Professional. However, one of the most ground-breaking features of TFS and Visual Studio Test Professional is its ability to run automated tests.

By automation we mean performing complex tasks and verifying the results of users’ interaction with your application, as you can see in this example.

Enjoy the 4th lab on Team Foundation Server 2010’s Test Manager – this time it’s about test runs.

Team Foundation Server 2010 Test Manager Lab 4 - Test Runs

As always, make sure you watch it in Full Screen HD!

The previous lab is available here.