Deploying Windows 7 by Using Windows Deployment Services

by Dmitry Kirsanov 14. November 2011 09:10

In one of the previous topics, I showed how to deploy Windows 7 using the Automated Installation Kit, or AIK. This time it’s more hardcore, and is about Windows Deployment Services.

Using Windows Deployment Services you can deploy your fine-tuned Windows 7 image to hundreds of computers in a matter of minutes. It’s not the most hardcore way of doing it, as I will show in the SCCM course later, but it’s still mind-blowing if you haven’t seen it before.

It’s not rocket science though, so the whole lab is just 16 minutes. Very short indeed, but it even covers the installation phase.

Enjoy!

Deploying Windows 7 by Using Windows Deployment Services

Team Foundation Server Quality Assurance Lab 8 - Miscellaneous Testing

by Dmitry Kirsanov 14. November 2011 09:02

The last lab in the Team Foundation Server Quality Assurance course, this time about miscellaneous testing techniques you can use.

In this lab you will learn how to create ordered tests, use the test list editor and create exploratory tests.

After filming this lab I missed the topic so much that I decided to create another one, this time just for software developers (this one was for Software Quality Assurance specialists). It will be about how to use TFS on a daily basis, and it will be narrated for sure!

TFS Team Foundation Server Quality Assurance Lab 8 - Miscellaneous Testing

Enjoy, and don’t forget to switch to HD. This lab is rather old and the quality probably suffers a little; the new one, which you can see in the previous post, is rather better.

Network Load Balancing Clusters in Windows 2008 - when one server is not enough

by Dmitry Kirsanov 14. November 2011 08:52

Russians say – “one man in the field is not a warrior”, meaning that one is just too small a number for a real war. When the time of real battle comes to your web site, it’s time to become a … farmer. The geeky kind.

A web farm, or Network Load Balancing (NLB) cluster, is when more than one web server sits behind a single web address. Of course, it’s not only about the web – other stateless services can be scaled that way as well – a DNS server, for example, or SMTP. However, the most popular use of NLB clusters is the web, as most Internet traffic goes through it.
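As a toy sketch of the idea (my illustration, not how Windows NLB works internally – NLB filters traffic with a kernel-mode driver on every node – and the node names are made up), a single address fanning requests out across several servers looks like this:

```python
# Toy illustration of a web farm: one entry point, several nodes.
# Windows NLB does this in a kernel-mode filter driver; this sketch
# only shows the effect of spreading requests over a node list.
from itertools import cycle

servers = ["web01", "web02", "web03"]  # hypothetical cluster nodes
next_server = cycle(servers)

def route(request_id: int) -> str:
    """Hand the next incoming request to the next node in round-robin order."""
    return next(next_server)

assignments = [route(i) for i in range(6)]
print(assignments)  # each node ends up with every third request
```

The point of the sketch: because the nodes are stateless, any of them can serve any request, and adding capacity means just appending another server to the list.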

Network load balancing clusters are a rather frustrating topic for many systems administrators, as it’s very common for them to know clustering only until the day of their exam. MCSEs and MCITPs of all kinds have to know that stuff, but rarely use it. Who might be more interested in clustering is web developers, whose web applications serve more and more visitors each day and should be infinitely scalable by design.

Developers

But what does it take to build an application which can be scaled out by simply adding more hardware? If your application is working fine and attracts more customers than it can handle – that’s when you start wondering whether you’re in trouble. The trouble comes when you realize that the architecture of your project does not support scaling, and the situation is even worse if your web developer has no clue how to make it work in a cluster.

For .NET developers, though, the situation is much better than, for example, for PHP developers. They can use SQL Server to store the state data (and SQL Server may reside on a failover cluster, which is the second type of Windows cluster that we’ll review later), files can be stored on mirrored network attached storage (NAS), and that’s it.
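As a rough illustration of that .NET side (a sketch, not taken from the lab – the server name SQLCLUSTER is made up), ASP.NET can be told to keep session state in SQL Server via web.config, so any node of the NLB cluster can serve any request:

```xml
<configuration>
  <system.web>
    <!-- Out-of-process session state: every NLB node reads the same store -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=SQLCLUSTER;Integrated Security=SSPI"
                  timeout="20" />
  </system.web>
</configuration>
```

With this in place the application no longer cares which physical server handled the previous request, which is exactly what makes scaling out by adding nodes possible.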

Systems Administrators

For administrators, though, the situation is more difficult. First of all, they are the ones who need to take care of installation, maintenance and management of the cluster. They are the ones who migrate old applications to new clustered servers and must understand why those applications don’t work under the new conditions. While usually developers carry the heavier burden, this time it’s not the case, thanks to Visual Studio and .NET.

While there is not much to say about the developer’s part of the job, as there is too little and too simple stuff to do, there is plenty to say and show to system administrators.

That’s why I had no choice but to prepare my first narrated lab, about creating a Windows Server 2008 Network Load Balancing cluster. Enjoy!

Implementing Network Load Balancing Clusters in Windows 2008

Perhaps my future labs will be narrated as well, except for the short and simple ones. It takes a bit more work, since I don’t prepare the text in advance and have limited time to complete the lab (I always do it in one take), but it is definitely more fun, as I can tell you more than you want / need to know about the subject.

Deploying Windows 7 by Using Windows AIK

by Dmitry Kirsanov 9. November 2011 19:20

Another aspect of corporate systems administration is the ability to deploy anything and everything at once without even leaving your chair. In the Windows world, we have had that ability since Windows 2000, and it evolves with every new version of the operating system.

One of the key tools for installing the operating system itself is the Windows Automated Installation Kit (AIK). Windows uses a so-called “answer file” so that it doesn’t ask you for things with known answers. And it’s not only the serial number, user and computer name, but also partitions, drivers and other things that could otherwise take hours to install and configure.
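As a rough sketch of what an answer file looks like (heavily abbreviated – a real unattend.xml component carries processorArchitecture and versioning attributes, and the values here are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <!-- Answers setup would otherwise stop and ask for -->
    <component name="Microsoft-Windows-Shell-Setup">
      <ComputerName>WKS-001</ComputerName>
      <RegisteredOrganization>Contoso</RegisteredOrganization>
    </component>
  </settings>
</unattend>
```

Each `settings` block is tied to a setup pass (specialize, oobeSystem and so on), and each `component` answers the questions that pass would otherwise stop and ask.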

As a potential deployment scenario, imagine that you’ve just received 100 new computers from a hardware vendor. 100 brand new machines with no operating system installed, as you will use the corporate edition of Windows 7 – a version which you can’t buy in a local store. Your task is to install it as soon as possible – it’s Friday evening and you don’t want to waste your weekend on it.

So you prepare an image of one machine and deploy it to all the other machines over the local network. A very simple thing to do when you know what you are doing.

The following walkthrough lab is from Microsoft Official Curriculum 6294A: Planning and Managing Windows 7 Desktop Deployments and Environments. It shows you how to create bootable media with an image of your reference workstation and deploy it to other machines. Enjoy!

Deploying Windows 7 by Using Windows AIK

Microsoft System Center Configuration Manager 2007 LAB 1 / 13

by Dmitry Kirsanov 8. November 2011 23:13

One of the most sophisticated and complicated products made by Microsoft, System Center Configuration Manager (SCCM) is absolutely irreplaceable for corporate systems administration. It requires a lot of resources, a lot of knowledge and a lot of care to install and manage, and theory alone is not enough to become proficient with it.

So here it goes – the first lab in the series. It will show you SCCM in action in a controlled lab environment and will explain some of the product’s complexities.

A special note for those who want to study SCCM 2012 – there is no difference between the two when you are training. A course on SCCM 2007 fits perfectly and will explain most of the 2012 functionality.

Additionally, you may try the interactive labs at TechNet Labs.

As additional material, I would recommend the book from the Unleashed series, which is among the best of its kind.

SCCM 2007 Lab 1 / 13

Team Foundation Server 2010 Quality Assurance Lab 6 - Coded UI Tests

by Dmitry Kirsanov 8. November 2011 22:56

It seems my labs are evolving, as I am adding a few comments where needed and have added a soundtrack to help you focus on what’s happening on the screen.

During my years of training I found that ambient avant-garde music greatly helps in keeping focus on the subject, even when added as a background to narrated training. I didn’t add Biosphere, though, as their tracks are rather short, although beautiful. Enjoy!

Coded UI Tests–Visual Studio 2010 Test Manager Lab 6

The previous lab can be seen here.

Security through obscurity

by Dmitry Kirsanov 8. November 2011 19:46

A rather short note for pen-testers.

Sometimes you have software which contacts web services – especially interesting when it’s about transferring files.

Sometimes software packages, especially custom ones made for a small number of customers, have web services open for consumption by that software.

Pay attention to them. Sometimes there are exposed functions which could be exploited in ways the developers were not able to imagine.

For example, during my most recent pen-test, I was able to upload, delete and execute files on the server using only the functions of an exposed web service. Needless to say, I wouldn’t need any hacking tools or social engineering to penetrate the networks of their customers either.

This topic is rather neglected in CEH and similar courses, but with some basic knowledge of programming you could kill the whole family of rabbits with one shot.

Also, a side note about pen-testing: I noticed that even when you’re using the simplest, “no-brainer” technique, the customer will call you a “hacker” or a “genius” just to avoid calling their developer or system administrator an idiot.

See the forest behind the trees

by Dmitry Kirsanov 8. November 2011 19:29

Today I was walking through the city and suddenly saw the car of one of our local IT companies. The motto on the side of the car said – “we see further”. Yeah, right.

For years it has been the dream of each and every CEO to look one step further than the others. To be what they call a “visionary” or even a “strategist”. To keep a hand on the pulse of technology, you know. To seize possibilities before others react.

However, the funny thing is that most of them don’t see the forest for the trees. They fail not only to predict, which is more or less OK, as a sales guy is not necessarily an analyst. They fail to see the trends in their own niche, the living processes inside their own organization. So what you read on LinkedIn and similar resources is mostly the chewing of the same “enlightening” gum.

The biggest and most consumed chewing gum these days is the cloud. Cloud computing, that is. Without understanding what the cloud is, CEOs usually think about the same features of it:

  • No more server room – we can place everything in the cloud, so this will save us money.
  • All of our clients will use our solution which is placed in the cloud, so we won’t mess around with servers, and this will save us money.
  • We will save money on IT staff – fewer nerds on staff is always good.

Et cetera.

Recently I came across a solution plan which was designed through the rose-tinted glasses of SaaS (Software as a Service). It is a currently successful corporate application which is about to “go cloud”, so all customers will use one web site and won’t need to install the application locally. The (rather hidden) problem is that this application requires administrative privileges in the customer’s Active Directory, which means – on all computers of the company. And all customers will use the same instance of that application. And there are nuclear power plant operators among the customers.

I would say – “one ring to rule them all”, but you remember the story, right?

Corporate PR specialists rush into social networks without insight. They don’t understand the consequences; they are just playing poker. They don’t understand, for instance, that what they are doing is less effective than using a computer program to do the same thing. And once they start to use that program, they themselves become useless, as creativity (the only genuine thing that computers don’t have, but can imitate) can be borrowed through outsourcing or simply dismissed.

The same goes for HR and some other specialties – the work becomes more automated, then it will become a “cloud” application, and then operating that application will become part of someone else’s responsibilities. Which will always be more effective than most human specialists.

These days, creativity, speed and precision alone are not enough. You need knowledge, which is always neglected and, it seems, always will be. The ‘enlightened CEO’ was at the core of the dot-com bubble problem, and it is the same with any technology-related hype. Because technology is based on knowledge, and decision-makers simply lack it.

Look at the top players in the IT business. The most successful ones are those founded and led by scientists, not by entrepreneurs. Talking about software companies, Apple and Google were founded by scientists. Microsoft as well. Where the CEOs were not scientists, as at Google, they didn’t make the technological decisions, such as what their product would look like and how it would work.

However, most other IT companies are led by entrepreneurs, sometimes with insignificant experience in IT, who make the key decisions. And fail.

So, the moral of the story: you can’t just use someone else’s knowledge and experience, mainly because you won’t have complete access to it, only to its public portion. You must have your own. And prove to yourself that you have it.

 

The devil, as you know, is in the details. There was a time when you could just copy what others did and chances are you would be fine (remember the IBM PC?). These days, with the cloud and SaaS and whatever other buzzwords come to mind, the frontier is much wider, and you have to be a great analyst to understand why someone else’s solution works the way it does with such success – because there are many details hidden from view, hiding somewhere in the cloud, which won’t apply to your case.

Think about what you’re doing; don’t just look at others.

Password policy of our time?

by Dmitry Kirsanov 4. November 2011 06:18

When I began studying computers in the beginning of the 90s, I adopted the password policy of that time, which stated that passwords should be at least 8 characters long and complex, meaning there should be a number, uppercase and lowercase letters, and it would be nice if there were also a special character.

With Windows NT 4 we got an addition to that rule, which was rarely used in practice: the password should be longer than 14 characters, as otherwise it could be cracked in a matter of seconds.
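A sketch of the arithmetic behind that rule (my illustration, not from the post): the old LM hash uppercased the password and split it into two independent 7-character halves, so cracking a 14-character password meant searching two small spaces instead of one huge one.

```python
# Rough effort comparison between the LM scheme and a proper 14-char search.
LM_CHARSET = 69    # approx. LM alphabet: uppercase-only letters, digits, symbols
FULL_CHARSET = 95  # printable ASCII, what a case-preserving hash must cover

lm_effort = 2 * LM_CHARSET ** 7    # two independent 7-character searches
full_effort = FULL_CHARSET ** 14   # one 14-character search

print(f"LM effort:   {lm_effort:.3e}")
print(f"Full effort: {full_effort:.3e}")
print(f"LM is weaker by a factor of about {full_effort / lm_effort:.1e}")
```

The exact charset sizes are approximations, but the conclusion holds for any reasonable values: splitting into halves collapses the search space, which is why passwords at or under 14 characters fell so quickly on NT-era systems.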

Windows has additional rules in a corporate environment, but all of them are basically about the length, complexity and maximum age of the password. However, while you can enforce that on a corporate network, most people are far from understanding the underlying idea of a password policy, can’t estimate the cost of a weak password, and overall are ready to adopt a policy only if it is reasonable enough.

So I decided to create such a policy for myself – take a look at what I came up with:


Introduction to Corporate Computing

by Dmitry Kirsanov 4. November 2011 01:31

Imagine that you have a company with 5,000 employees and 6,000 computers. These could be desktop computers, notebooks, various mobile devices – anything running Windows. And you have 10 people to manage all this hardware and software.

Working under such strict conditions, you can’t avoid standardizing everything you can. And with such a tiny staff, at a ratio of 600 machines per IT staff member, you want to automate everything and make the environment more reliable and less dependent on system administrators.

Imagine the situation when you need to install software on all machines, or perhaps on half of them, which is 3,000 computers anyway. With the help of Active Directory you can do it automatically, if the application ships with an MSI (Microsoft Installer) file. If it doesn’t, you can execute the legacy EXE installation and deploy it using Microsoft System Center Configuration Manager (SCCM), which is at version 2007 at the time this post is written, though version 2012 is already in the RC (release candidate) phase.

However, in both cases you might need to change the installation configuration significantly. Remove automatic updates, desktop icons, shortcuts in Startup and various screens welcoming new users. You may also want to make it impossible to change the installation, while keeping the Repair and Uninstall options. Or perform initial configuration, such as configuring your program to use DivX drivers or whatever else can be done once the program is installed.

But how?

That’s what repackaging is for. You can take anything you want and package it into an MSI installation file. That’s packaging. When you take an existing MSI package or a legacy EXE/BAT/whatever package and transform it into an MSI package, that’s called repackaging.

And it’s quite a profitable business.

The reason it is a profitable business is mainly that you need to be an expert in systems administration, and preferably also in hardware and software development, to successfully repackage the whole software portfolio your 6,000-computer company needs.

And most likely you don’t have such a specialist, or have better tasks for him, right? So you outsource that work to a repackaging company and agree to pay per conversion or per day of work, depending on the volume of work required.

Repackaging these days includes not only the conversion, but also testing and analysis of your software. For example, you may submit software you have been using for years to a repackager, and he will test whether his package, and the software itself, will work on the required target operating system, such as Windows 7 x64.

If not, he will recommend a course of action to make it work, and there is a correlation between his skill level and his “that’s impossible” answer ratio, as more experienced and skilled repackagers tend to solve the problem instead of giving up early.

So, let’s return to our company. Once you’ve got your legacy software repackaged into a stable and shiny MSI package, you install it wherever you need using the SCCM server. SCCM will make sure that older packages are updated with this one, but it won’t track your licenses for it, if any. So as you can see, there is a whole lot of new concepts for a standard systems administrator to discover.

If you are installing things like Microsoft Office or Adobe Acrobat Pro in your company, chances are you need to make sure you don’t install more copies than you paid for. And you want to track how many of them you have left, who needs them, and perhaps allow those who need the software to install it without you doing much about it. Remember, at a ratio of 600 machines per IT administrator, you only have 48 seconds per day for each workstation (28,800 seconds in an 8-hour day, divided by 600 machines).

So there are tools that track licenses, allow people to acquire licenses from a pool and automatically install the required software once it is approved by a supervisor, or vice versa – remove software from one machine and distribute it to others.

There are even scenarios where a user visits a homepage on the local network, requests a new computer pre-loaded with the required software, and, once the request is authorized by his supervisor, receives the new computer. What is important is that the computer arrives at your company with a blank hard drive, and all you need to know as the system administrator is the MAC address of that new computer and the recipient.

You enter the address into your system, power on the machine and forget about it for the next half hour. In 30 minutes it is ready to work, fully loaded with all the required software. Then you switch off the old workstation, switch on the new one, the user logs in and can continue working right away.

I deliberately don’t name the software packages that make this happen, to make it easier to understand that all of them work on top of the main layer – Microsoft System Center Configuration Manager, which I am going to talk about soon.

Anyway, as you can see, the single need to operate as many computers as possible with as small an IT staff as possible has led to a whole new sector of the IT market and highly sophisticated products, which you should learn if you are about to pursue a career in a large organization as a Windows Server systems administrator.

