How to Create an Internet Site with Windows NT Only

Copyright © John Neystadt, 1995-96.


My special thanks to James S. Kay, who took on the hard work of correcting the spelling and style of this document. In spite of his claim that my English (my third language) is much better than his Russian and Hebrew, this document would not look the way it does without him.

I also want to thank everyone on the NTMail-Discuss and Web-ServerNT mailing lists, whose questions and answers quietly contributed to the content of this document.

To provide a convenient forum for discussing Windows NT as an Internet platform, I run the Windows NT in Internet Environment mailing list. The list is dedicated strictly to administering Windows NT in TCP/IP networks.


One of the major problems of maintaining an Internet presence for any site is the complexity of administering the various Services. For this, Windows NT has advantages over UNIX that very few administrators are currently exploiting.

Well what makes up a site?

Mastering each of these components is a different job, with numerous variants depending on your needs, budget, and existing hardware and software.

Local Network or One Computer Site

The most basic site consists of just one computer but usually you will have multiple computers connected together by a LAN. Every computer should have the TCP/IP stack installed.

Windows NT comes with a built-in TCP/IP stack that functions very well, although there are alternative TCP/IP stacks on the market. Microsoft also provides TCP/IP stacks for Windows for Workgroups and Windows 95.

NT has the unique ability to backbone (operate natively) over multiple protocols, including TCP/IP, in contrast to Novell NetWare, which can run only IPX. That means you can have TCP/IP as the only protocol on the network if you don't have to connect to Novell servers. Another alternative is to use IPX or NetBEUI internally on your LAN. Both methods have their pros and cons. Using IPX for internal communications is safer: you open fewer points for break-ins from the Internet. The major drawback is that each additional stack uses a lot of memory on each workstation and also degrades performance. The alternative is to use TCP/IP as the only protocol on all stations in the network. This option is also attractive because a single stack uses less memory. TCP/IP is a very robust protocol, and Microsoft put a lot of effort into their implementation because they use TCP/IP on their own internal networks.

Using IPX or NetBEUI to backbone

The main advantage of this approach is protecting your internal network from the outside world. If you choose it, disable native networking over TCP/IP under Control Panel\Network\Bindings in order to prevent intruders from entering your network.

Securing the network with TCP/IP protocol

If you choose to run TCP/IP on computers connected to the Internet, you should be very careful about outside users being able to connect to your computers. If a hacker can discover a valid user name and password, they will be able to access drives on your computers, run programs, and even perform administrative functions.

Connecting your Site to Global Internet

You have many options to connect your site to the Internet depending only on your budget. The basic connection uses RAS (Remote Access Service) that is included with Windows NT. You can connect and establish routing between your network and the network of your Internet Provider. Of course you will need an appropriate account and setup at your Internet provider, but this differs considerably among providers.

One point that a lot of people neglect is tuning the SWITCH.INF file, which contains the scripts for automatic dialing. Don't give up: I've managed to make it work, and you can too. After all, it was written only by Microsoft programmers; think what could have happened if Gates had incorporated his famous BASIC engine inside.
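For illustration, a minimal SWITCH.INF login script might look like the sketch below. The section name, prompts, user name and password are all placeholders; the prompts your ISP's terminal server sends will differ, so adjust the <match> strings accordingly.

```
[MyISP Login]
; Wait for the login prompt, send the user name,
; then wait for the password prompt and send the password.
; "janedoe" and "secret" are placeholders.
COMMAND=
OK=<match>"ogin:"
LOOP=<ignore>
COMMAND=janedoe<cr>
OK=<match>"assword:"
LOOP=<ignore>
COMMAND=secret<cr>
OK=<ignore>
```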

This link contains an NT Internet FAQ. It's a little bit outdated but will help you to setup the Windows NT RAS link.

You will not be able to connect your entire network to the Internet unless your ISP (Internet Service Provider) has configured this function on his side. A lot of people ask what they are doing wrong, and it turns out the problem is at their ISP.

If you want to connect at higher speeds or have more control over your connection, you will need a dedicated router such as those made by Cisco Systems. You can also benefit from a router in other ways by using it for your internal network. The simplest Cisco router costs about $2000, making this a relatively expensive solution.

A router will give you a lot of options for packet filtering (security), line load control (backup dial lines in case of failures) and a few other features that will contribute to the reliability of your connection. The most important is packet filtering. By blocking all traffic on UDP ports 137-138 and TCP port 139 you will prevent all NetBIOS traffic between your network and the Internet, effectively isolating your site from intruders.
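As a rough sketch of such a filter (the access-list number and interface name are examples, not a recipe; check your router's documentation), a Cisco IOS configuration blocking NetBIOS traffic might look like this:

```
! Deny NetBIOS name, datagram and session traffic; allow the rest.
access-list 101 deny   udp any any eq 137
access-list 101 deny   udp any any eq 138
access-list 101 deny   tcp any any eq 139
access-list 101 permit ip any any
!
interface Serial0
 ip access-group 101 in
```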

A router also maintains routing tables that allow you to build more sophisticated setups than a single gateway to a single Internet provider.

There are FrameRelay cards available that allow you to connect a wide-area serial line directly to a computer; however, the lack of full control over the routing process makes this solution less than the best. The main benefit is that it can be much cheaper than a hardware router. Another benefit of a hardware router is that routers are generally much more difficult to break into than computers.

Microsoft has released an add-on for NT called Multi-Protocol Router that allows you to build and maintain routing tables on NT hosts; it supports both the RIP protocol and IPX routing. However, it does not allow packet filtering.

A comparison chart of the different routing solutions by power/customization level, speed, price and security:

                  Power/Customization  Speed  Price      Security
Modem with RAS    None                 Slow   Cheap      Low/Less important (1)
Hardware Router   High                 Fast   Expensive  High (2)
FrameRelay Card   None                 Fast   Medium     Low

(1) Usually networks using slow modem links are not popular sites on the Internet and do not contain a large volume of resources. Those networks are less exposed to the public, however, which reduces the security risk.

(2) Routers must be configured properly in order to function as security guards.

Running DHCP (Dynamic Host Configuration Protocol) Server

One of the recent additions to the TCP/IP protocol suite is DHCP, which was developed by Microsoft in conjunction with other TCP/IP vendors. This protocol solves one of the oldest and most complicated problems in TCP/IP network administration.

As you know, a group of subnetted TCP/IP networks shares a number of parameters that must be consistent across all hosts on the LAN to enable proper communication. At a minimum, each host should have an IP address, a subnet mask and a default gateway.

Together with these required settings there are a number of TCP/IP and NetBIOS specific parameters that could be changed such as MTU, TCP frame parameters and NetBIOS node types.

If you administer a network with tens or even hundreds of hosts, each of them must be configured consistently to allow even basic communication with the other hosts. For this reason the traditional nightmare of every network administrator has been the need to change something like the network number, which generally happens in one of these cases: you move to a new location or ISP, your ISP acquires a new CIDR block of addresses, or your network outgrows its original address range.

So it is clear that everyone would benefit from centralizing the registration of these settings and the allocation of IP addresses. The Internet has a number of protocols for this purpose, such as BOOTP, but only DHCP really eases this task.

The administration of a DHCP server is very simple: just install a DHCP server on one of the NT computers on your network, specify the global settings that will be shared among all the hosts, and define the IP address allocation algorithm. Then all you need to do on the host computers is tell them to configure themselves from the DHCP servers. The simplicity of DHCP is amazing. I once needed to change the address of a network with about 30 Windows NT and Windows for Workgroups computers. It took me about 4 minutes to tell everyone to shut their computers down, change the network number on the server, restart the server, and tell everyone to turn their computers back on.

Nobody believed that this was possible. Furthermore, when installing new routers or changing DNS scenarios I just changed DHCP settings and all the hosts reconfigured themselves totally on their own.

You may say that all this is very nice, but what about computers that require fixed IP addresses? Various servers - DNS, SMTP relays, and even regular computers that you want to have fixed names for. There is no dynamic name resolution; DNS cannot help (this has been discussed for a long time, but without a solution yet), and computers such as name servers require fixed addresses anyway. The solution for this is "reservations": on the DHCP server you preallocate IP addresses for specific computers, based on the hardware (MAC) address of their network interface (i.e., the Ethernet address). This allows you to give some or all of your computers fixed IP addresses. The configuration becomes less flexible and takes more time to reconfigure, but it is still better than defining IP addresses and configurations on each host - all addresses are configured and maintained on a single computer.
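The allocation logic with reservations can be sketched in a few lines of Python (a toy model, not the actual DHCP Server code; the IP and MAC addresses are made up):

```python
class DhcpScope:
    """Toy model of a DHCP scope with per-MAC reservations."""

    def __init__(self, pool, reservations):
        self.free = list(pool)            # addresses available for lease
        self.reservations = reservations  # MAC address -> fixed IP
        self.leases = {}                  # MAC address -> current lease

    def allocate(self, mac):
        if mac in self.leases:
            return self.leases[mac]       # renewing an existing lease
        if mac in self.reservations:
            ip = self.reservations[mac]   # reserved host: always the same IP
        else:
            ip = self.free.pop(0)         # ordinary host: next free address
        self.leases[mac] = ip
        return ip
```

A host with a reservation always receives its preassigned address, while ordinary hosts draw from the pool; this mirrors how reservations keep DNS-registered machines at fixed addresses while everything else stays dynamic.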

I recommend that you read more about DHCP servers in the Windows NT Resource Kit books.

Running Domain Name Server - DNS

The Domain Name Server, or DNS, is part of a distributed database that serves two purposes, of which usually only the first is discussed:
Readable names
DNS allows people to use human-readable names instead of numeric IP addresses.
Abstraction of names from Addresses
DNS allows you to move computers and sites between service providers, offices and even countries without anybody outside even noticing. As part of this name abstraction, e-mail addresses are decoupled from specific mail servers, which allows you to keep your e-mail addresses for long periods of time, independent of physical networks and software.
Almost the only implementation of DNS you will find in use is the BIND program. BIND is free and is maintained by volunteers on the Internet. BIND was ported to NT by two commercial companies, FBLI and MetaInfo. The freeware port of BIND to NT was the port of BIND-4.9.3-BETA24 provided by Viraj Bais on behalf of Intel Corp. This port is now maintained by Larry Kahn and Greg Schueman and made freely available.

If you are interested in getting a copy of the free BIND port to NT, email for info on getting onto an ftp site to download the files (source) or (binaries only). Note: these zip files contain long file names, so you will need an unzip utility that supports them (e.g., Info-ZIP, or download unzip.exe from the same ftp site).

Here are links to two documents that should answer all your specific BIND questions.

Look also at Randall Golhov's guide to NT DNS Configuration, which is a good example of how to do it.

There are alternative implementations of the DNS system besides BIND, but they usually either do not conform to all the standards or give you only a subset of BIND's functionality. Microsoft is working on a DNS implementation for NT that will incorporate dynamic (WINS) address resolution and other NT-specific features, but it has already been in beta for almost a year and it seems they have put it on hold. You can try downloading the beta, but speaking as one who has checked it, I would not recommend using it, due to its instability and protocol non-compliance.

The Problem of Dynamic IPs and DNS

The major problem with DNS for Windows networks is not DNS itself - DNS can run on NT, on a UNIX box, or even at your ISP. The problem is dynamic DHCP addresses. If you choose to use DHCP for dynamic IP address allocation, which is the preferred solution, then you will not be able to define DNS names for your computers, since DNS requires fixed addresses.

If you use only NetBIOS networking - file operations or client/server applications written with NetBIOS in mind - you will never face the problem of name resolution for dynamic addresses. However, with TCP/IP-aware programs such as Netscape, ORACLE SQLNet (with the TCP/IP transport), NFS for NT, or even simple TCP/IP utilities such as TRACERT (traceroute in UNIX), you do need DNS, and this becomes a problem. Microsoft is working on a special version of DNS for NT that will support dynamic resolution through WINS; they say it will be part of NT 4.0, but there is no complete solution for now.

The only thing you can do is give your important computers fixed IP addresses with DHCP reservations. Then you can register these computers in the DNS. This is neither an ideal nor a convenient solution, but it is the only thing you can do for now.

Running Remote Access Server - RAS

A Remote Access or dial-in server is needed if you want to connect to your network from a remote location. The most basic Internet connection is dialing into a dial-in server with a modem and connecting to your ISP's network via either PPP or SLIP.

Traditionally, Internet Service Providers have used terminal servers for this purpose; one model of Cisco routers, for example, can serve as one. Another company in this market is Shiva, whose name is closely associated with remote access. Like other companies, Microsoft needed dial-in access for its own workers; currently their headquarters has a dial-in installation with more than 50 lines. Instead of using an existing standard terminal server, Microsoft implemented one for NT. This package, called Remote Access, comes with every NT Server and Workstation. It contains two parts: one, already mentioned, is used for dial-out, and the other, called Remote Access Server, serves dial-in. The version that comes with Workstation is limited to one dial-in port, while the Server's allows up to 255 dial-in ports on one station. A regular PC will not allow you to have more than 4 COM ports, so without a multi-port card you will not be able to build a server with a larger number of lines. One manufacturer of multi-port cards is Digi International; the DigiBoard cards work excellently with Windows NT.

A few months ago PC Magazine Labs tested remote access products and rated NT Remote Access the best tool on the market in this area. NT Remote Access Server supports all standard protocols, including TCP/IP, IPX and NetBEUI over PPP, and all standard authentication protocols, including PAP, SPAP and CHAP. Remote Access Server allows both dynamic and static IP addresses and can work either with a DHCP server or with a static pool of addresses. Note, however, that if you do not use a DHCP server, clients will not be able to automatically acquire other TCP/IP parameters such as DNS and WINS addresses. Dialing clients can either be allowed to access the whole network (and through it the Internet) or be limited to the server itself. One of the handiest features of RAS is callback: RAS can take a user-initiated connection and call the client back, either at a phone number the client supplies or at one preconfigured by the administrator. The latter is an excellent security solution - a server that only connects to clients by calling them at predefined phone numbers is virtually unbreakable.

RAS logs all connects and disconnects, together with session statistics, in the Event Log, which service providers can later use to calculate bills. Internet-Shopper provides a library of RAS software for Windows NT. All this makes Remote Access Server an excellent solution for in-house dial-in. It seems, however, that RAS still lacks a few features needed by ISPs. For example, there is no way to prevent multiple logins by the same user. Also, RAS connections can't trigger route updates, which is important if you wish to connect LANs via RAS.

Running Mail Server and SMTP Relays

I believe that a majority of the audience needs some introduction here, so you SMTP and DNS specialists, please skip to the second part of this chapter.

E-Mail in a nutshell

Some people say that 50% of Internet transactions are mail.

There are a lot of different mail scenarios that can be created. Which one you want depends on the number of your users and offices, on the connectivity of your site to the Internet, and on a lot of other factors.

Basically, Internet mail has one major disadvantage compared to corporate mail systems such as Microsoft Mail or Novell GroupWise: it lacks integrated directory services. Just try to get all your marketing people to understand and remember President Bill Clinton's e-mail address. There is no way they will remember it. Now, when we are talking about tens or even hundreds of people, the problem grows. This is why there are dozens of mail systems that provide a much better user interface than Internet mail. If your company uses some corporate mail system, then in order to connect to the Internet you will need a mail gateway between the SMTP world and your mail system. Almost all mail systems provide these gateways. On the other hand, you may want to use only Internet mail, since it is much cheaper than any other mail system.

A few months ago PC Magazine ran some tests, as they always do, and found that of 1000 messages they sent, about 200 did not reach the intended recipients. That led them to claim that Internet mail does not work well. That is not true; Internet mail is very reliable, much more so than most other mail systems. The point is that you need to know how to configure it right. My personal statistics say that 50-70% of Internet sites have incorrectly configured mail servers, which in the case of various failures makes about 10%-20% of addresses unreliable. From my own experience running the Windows NT on the Internet mailing list, its growth showed me that about 20% of subscriber addresses are invalid. Also, when I added an option to reconfirm subscriptions by requiring an additional mail message at subscription time, I saw in the logs that about 40% of people failed to subscribe, either because of a misconfigured mail system or because of illiteracy.

There are a lot of domains on the Internet that have only one name server or, more frequently, a few name servers at the same location. That is bad. One thing that RFC 920 says (note that this is a requirement - you don't drive your car without a license, right?) is that every domain has to have at least two name servers with no common point of failure. It is extremely important that you have a secondary server somewhere else on the Internet, one that does not sit on the same power line as you and has a separate communication line to the Internet. What happens if you don't have such a server? The moment your network or power fails, or you simply have to take everything down for an urgent administrative reason, all lookups for your domain will fail, and most of the mail sent to you at that moment will be returned to the sender. It is hard to believe, but this simple point, which has been explained hundreds of times, is so hard to grasp that at least 20-30% of domains are configured incorrectly.

Pay attention that one secondary name server at your Internet Service Provider is not enough. I bet you share the same communication line with them, which means that the moment their line fails, both they and you are disconnected.
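For illustration, a zone served by two properly separated name servers would contain records along these lines (all names are placeholders):

```
; Two NS records with no common point of failure:
example.com.   IN  NS  ns1.example.com.       ; on your own network
example.com.   IN  NS  ns.remote-isp.example. ; different provider, different line
```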

Now to Mail eXchanger (MX) records. Even when your name servers are configured correctly you can still have a lot of problems. It's funny, but the main problems show up because of UNIX and almost never on NT. It is very important to configure MX records for your hosts. Otherwise, every time e-mail is sent to a host with no MX record, it is up to the other party how to handle it.

What is an MX record? In DNS you specify a list of machines, by precedence, that will handle mail for your site. I would recommend putting only two machines there: one is the machine that should receive all your mail, and the second is one with the best Internet connectivity and uptime you can find; preferably this will be some large mail relay at your ISP or university. What is the concept of a mail relay? Regular hosts are turned on and connected to the Internet only part of the time, sometimes just a few hours. Even large Internet mail servers at your office may be on the air only part of the time, unless you have a very good reason otherwise. Now, when one machine that is only up part of the time sends mail to another machine that is also only up part of the time, there is a lot of time when only one of them is up. Also, many machines are up but have failed Internet links. The solution is to designate a machine you know you can trust, both administratively and in terms of uptime, because the moment your mail gets to it, it is under their control: they can read it, delay it or throw it away. So pick this machine well - something like the name server of your country, or a large ISP.
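To illustrate, the two-machine setup described above corresponds to MX records like these (hosts and preference values are placeholders; the lower preference value is tried first):

```
; Mail for example.com goes to mail.example.com when it is up,
; otherwise it is queued at a well-connected relay:
example.com.   IN  MX  10  mail.example.com.
example.com.   IN  MX  20  relay.big-isp.example.
```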

In the UNIX world, each machine usually runs Sendmail but has no MX records configured for it. What happens is that when you log in on such a machine and send mail, the mail goes out with the name of the host prepended, like user@host.domain. Then, when somebody replies, the mail is sent to the unconfigured machine, and if that machine is down for a few days, or has its Sendmail down, your mail will be lost.

System Administrators:
Please configure your users to have site mail addresses rather than host ones; that will simplify life for all of us.


The times when you needed a UNIX box to run SMTP mail are over. NT has a few very good mail systems and relays that will do the whole job for you.

You have several options when deciding which e-mail system to use. There are three components on which to base the decision:

Let's look at the three most common scenarios, from simplest to most advanced:
  1. POP server, waiting for connection to forward mail to ISP for delivery
  2. Corporate Mail (Ms Mail, cc:Notes)+SMTP Gateway, forwards to ISP for delivery
  3. POP server/Corporate Mail, SMTP server handles/delivers all mail on its own
Here is a comparison chart of the above methods:

   Required Connectivity  Delivery Speed     Control         Security      Price
1  Partial                Few hours/days     Very small      Small         Cheap
2  Partial/Good           Few minutes/hours  Medium/Good     Small/Medium  Cheap/Medium
3  Very Good              Few minutes        Good/Excellent  Medium/Good   Medium/Expensive

So what you should decide on is the user agent part and the delivery/receiving module. The user agent is generally chosen by the personal taste of the user. Many people like Eudora; Microsoft Mail and Microsoft Exchange are also popular. There are tens of mailers on the market. One important approach, as I've already mentioned, is to integrate Internet mail with your corporate mail system. In that case you would use your corporate reader, such as Ms Mail or GroupWise, to read all your mail, both internal and Internet.

The important concept to understand is the delivery module, which is the most critical part of the Internet mail world. Once a message gets from you to your mailer, and from it to your mail server, the message must be delivered to the final recipient. This is an important task that requires a direct Internet connection and well-configured software. In UNIX this job is done by a program called Sendmail. Generally you need to maintain your own mail delivery module, or SMTP relay, when you have a serious mail load and want to control the mail flow yourself.
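The core of what a delivery module does with MX records can be sketched in Python (a simplified model, not a real relay, which would also queue and retry; the host names below are invented):

```python
def order_mx(records):
    """Sort (preference, host) MX records: lowest preference is tried first."""
    return [host for preference, host in sorted(records)]

def deliver(message, records, try_send):
    """Attempt delivery to each MX host in order; return the host that
    accepted the message, or None if every host is unreachable."""
    for host in order_mx(records):
        if try_send(host, message):
            return host
    return None  # all hosts failed: a real relay would queue and retry later
```

If the primary server is down, the message lands at the higher-preference relay instead of bouncing; that is exactly the role of the trusted relay described above.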

Delivery should be handled at the ISP side until you have a good persistent connection. Also, SMTP relay management requires a very good understanding of SMTP concepts, so don't transfer mail delivery to your side until you feel very sure about how it should function. There are a few excellent SMTP mail relays for NT. The one that is also used on UNIX systems is Post.Office by Inc. A lot of NT'ers like NTMail, developed by Brian Dorricot at Net-Shopper. These two products provide virtually every feature you could ever dream of. MetaInfo Inc. has ported UNIX Sendmail to NT; however, I haven't heard of anybody using it. EMWAC is developing IMS - Internet Mail Services, which is very simple, like their HTTPS server, but seems to work. IRISoft has released Mi'Mail NT Server, which claims to be very full-featured; I haven't checked it personally, however, and would be glad to get feedback from someone who has.

There is a product called UUPC for NT that allows you to connect in the older UUCP style. This should not be needed unless you have already deployed applications based on UUCP or your ISP requires you to use it.

So if you decide to go with a POP client like Eudora or Exchange, just get one of the above packages and all the software you need is there. But what about corporate mail?

The standard approach to connecting two different mail systems is a mail gateway. In the case we are discussing, a mail gateway is a dumb piece of software that is connected to both Internet and corporate mail, and relays messages between the two. All corporate mail systems have SMTP gateways that will let you connect your Lotus Notes, GroupWise or Ms Mail to the Internet. There are also a few third-party products that, as usual, do the job better than the original; check out the very nice Post Union SMTP Mail Gateway, a unique product that is an all-in-one SMTP mail relay and gateway between all the major corporate mail systems.

In conclusion, I want to tell a story that happened to me while I was working on this document.

Some good soul sent e-mail to a mailing list with a 2.7MB executable file attached. This mail was to be distributed to 300 people... Well, not exactly: after the first 100 (100 users * 2.7 MB = 270 MB) the disk space on my servers ran out and mail delivery for the entire office stopped. I am also afraid to think about the poor 100 subscribers who had to retrieve this huge mail over their slow modem links, wondering what to do with that useful file they had never ordered.

A wise old Russian saying tells you to think seven times before acting. Applied to the Internet, I would like to say again that mail configuration is one of the most difficult issues on the Internet, so please, please, please read more material on it. Don't build a site that will make other people curse you.

Running FTP Server

Windows NT comes with a fairly good FTP server. This server is very good as a production, mission-critical server installed on a dedicated machine or on a machine with dedicated disks for FTP.

However, this server has a few major problems, making it unusable for small sites. It lacks detailed logging: the logging facility of the Microsoft FTP Server is optimized for high loads and automatic scripts but lacks detail.

More important, its security features are not good at all. Microsoft requires you to run the FTP Server under the SYSTEM account, thus exposing your NT machine to intruders and making you rely on Microsoft programmers for your security. Well, we know that most Microsoft programs do not contain any bugs and are very robust! (Try to run M$'s best-selling Windows for more than a day: if you don't run anything besides the screen saver, you can hope your machine won't crash, and even that depends on the screen saver!)

Another security problem is that the FTP Server uses the same user database as NT itself, and it is not necessarily true that you want to grant users from the Internet the same permissions as in-house users. There are a few alternatives to the Microsoft FTP Server on the market; for instance, try WFtpD.

After the introduction of Microsoft's Internet strategy and the release of IIS, things changed slightly. IIS provides a more powerful security concept, so it will be easier for you to set up an FTP server using it.

Running WWW Server

Choosing a WWW server is not an easy task: there are tens of Web servers on the market, each with its own good and bad sides. It is very hard to be comprehensive in this area, since things change very fast. You can take a look at the WebCompare server features comparison in order to pick your favourite.

I'll mention here only a few that have something special.

First of all, the classic WWW servers are Netscape's family of HTTP servers. These servers are the recognized leaders in this market. They are not cheap, but quality never comes for free.
IIS - Microsoft Internet Information Server
This server is part of Microsoft's new Internet strategy and is on its way to being one of the best servers on the market. Version 1.0 can be downloaded for free from Microsoft's WWW server. Version 2.0 is going to be an integral part of Windows NT and will ship inside Windows NT 4.0.

Running NNTP Server

NNTP, or USENET News, servers have traditionally been part of the Internet. Running a news server is not a simple task: carrying a full NNTP feed will cost you a lot of money, and it's not easy to administer. However, NNTP servers are very good for in-house communications and discussion groups; as opposed to e-mail, which demands your attention, NNTP newsgroups are a passive way to exchange information. It's a good idea to set up a few local groups specific to your company/organization and not redistribute them to the world.

There is a very good and fast NNTP server for NT called NNS, written by Jeck Coffler and recently purchased by NetManage. NNS is not a port of UNIX's INN but a stand-alone server conforming to NNTP. NetManage is going to release a commercial version of NNS, but the free one is also available from them. If you would like to contact NetManage regarding this product, feel free to send them e-mail.

Another NNTP server for Windows NT, made by NetWin and also very widespread, is DNEWS; it supports all the NNTP security features that were not originally supported by the freeware version of NNS.

Origin of Windows Networking and WINS Servers

Windows NT, Windows for Workgroups and Windows 4.0 (95) networking all came from an older Microsoft/IBM product called LAN Manager. LAN Manager itself is still used in some places such as IBM OS/2 and LAN Manager for UNIX.

This networking is based on the SMB protocol and accessed via the NetBIOS interface. Originally NetBIOS ran only on top of its own protocol, called NetBEUI. This protocol was designed without many modern requirements in mind and has two major limitations. First, NetBEUI is non-routable, making it undesirable for large networks. A second limitation, not well publicized by Microsoft, is that NetBEUI has problems in its basic implementation and does not work well on networks with a high load. Thus it is good only for very small networks, with up to 10 computers and a small load.

In order to move away from NetBEUI, Microsoft took a very clever step: the networking group decided that NetBIOS could be encapsulated, or as it's sometimes called, tunneled, in another protocol such as TCP/IP or IPX. Thus Windows can work with only TCP/IP, without NetBEUI. This is a very handy feature that is not present in systems like the Apple Macintosh, which is limited to AppleTalk, or Novell, which requires IPX.

Tunneling means that Windows NT does not take advantage of TCP/IP-specific features such as distributed name servers and the service port concept. What it does use is IP routing and TCP reliable delivery, which is most important.

The encapsulation, called NBT (NetBIOS over TCP/IP), is defined in RFCs 1001 and 1002. I highly recommend reviewing these papers; they explain a lot of concepts that are not covered in the Windows or Windows NT documentation.

The main idea is that all communication goes through a few fixed ports - UDP ports 137 and 138 and TCP port 139. That means that, in comparison with UNIX, where each service like NFS or NTP requires a separate set of ports, all NBT communications are limited to these ports, making them very easy to monitor or block.
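You can check from outside whether the session port is reachable with a simple TCP probe (a sketch using Python's standard socket module; run it from a host on the Internet side against your own address):

```python
import socket

def nbt_port_open(host, port=139, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except OSError:   # refused, timed out or unreachable
        return False
    finally:
        s.close()
```

If this returns True from the outside, your NetBIOS session service is exposed and the filtering discussed earlier is not in place.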

The NetBIOS name space differs from the Internet DNS name space, and Windows networking does not require DNS at all. Instead, Windows internetworking uses its own name server, called NBNS (NetBIOS Name Server).

The Windows NT implementation of NBNS is called WINS (Windows Internet Name Service). These were called Rhino servers (Rhino was the code name of Microsoft's NBT implementation project), and the name was kept for one of the Microsoft Internet servers for NT, which has recently been renamed.

The original NetBEUI relied very heavily on broadcasts, but in routed environments this does not work, and broadcasting also wastes bandwidth on the local subnets. The old solution for this was the LMHOSTS file, which required every computer to have a manually maintained name database. This was fixed by WINS servers, which perform this task automatically. Using WINS is as simple as installing the package and specifying the WINS server address to hosts, either on the hosts themselves or at the DHCP server. The immediate benefit of a WINS server is enabling computers located on different sides of a router to see each other. The second benefit is avoiding broadcasts.

In order to completely eliminate name resolution and name registration via broadcasting, you should change the NetBIOS node type from h-node to p-node.
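On Windows NT the node type can be set in the registry. The fragment below is a sketch in regedit's REGEDIT4 format; the NetBT Parameters key and the NodeType value (1 = b-node, 2 = p-node, 4 = m-node, 8 = h-node) are standard, but verify against your own configuration before applying. DHCP can also assign the node type to clients via option 46, which is usually the more convenient route.

```
REGEDIT4

; Force pure point-to-point (p-node) NetBIOS name resolution:
; all name queries and registrations go to the WINS server,
; never to the local broadcast address.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NetBT\Parameters]
"NodeType"=dword:00000002
```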

An additional feature of WINS is that it allows dynamic addressing. When a host acquires a new address via DHCP or from the system administrator, it automatically registers itself at its WINS server. This approach is in contrast to the one used with TCP/IP DNS, where new hosts are registered manually by the administrator.

Securing Windows NT Networks from the Internet

This part is mostly relevant to networks that have a full connection to the Internet and use TCP/IP as their internal backbone protocol. However, when administering partially connected networks, or networks using IPX or reserved (unroutable) TCP/IP addresses, you also need to understand the nature and concepts of these security issues very well.

It is recommended to read Origin of Windows Networking and WINS Servers first in order to best understand this chapter.


  1. Network Layer Security
  2. Windows NT Security Model in the Public Environment
  3. Security Issues and Running Network Server Programs
  4. Securing Windows 3.1X and Windows 95 stations
  5. Partially Connected Networks

Network Layer Security

First of all you must understand what it means that your network (or computer) is connected to the Internet. The Internet is not an information service such as CompuServe or AOL. The main idea behind the Internet is to function as an abstraction layer between any two computers connected to it. The Internet hides the numerous interconnected individual networks of which it consists and acts like one big network with millions of computers. That means that most functionality available between two computers on your LAN is also available between an arbitrary computer on the Internet and your own computer. It is extremely important to understand this concept.

So building uncompromising security involves two parts:

  1. Configuring your stations and servers against unauthorized access.
  2. Creating filters between your network and the Internet that can prevent or monitor undesired access.
When operating a network inside your organization, you usually care much less about security, since you trust the people in your organization more. For this reason it is possible that your stations have disk shares with no passwords or easily guessed ones, that your administrative password is known to a lot of people, and that your servers' file systems are not secured with appropriate permissions.

If your site is not too attractive to the masses, which in 95% of cases is true, the chances are that you mainly want to secure your network against ex-workers or competitors. This means that you would like to prevent them from accessing network resources from outside, when they are located at home or some other place. Legally, it is much harder to prove that damage was done by a specific person if it originated from outside your organization. People can claim that somebody impersonated them or used their home computer without their permission.

In short, this means that you have to isolate your network in such a way that nobody, including you, will be able to access your network resources from outside, even knowing your network infrastructure, passwords, server names, login names, or any other important information.

The classical approach to this is a firewall, and indeed it is an ideal solution for sysadmins who do not understand much about the system they manage. By declaring "disable everything, then enable what is needed," the firewall approach will prevent any unwanted access. However, firewalls have two major problems.

  1. Cost - a firewall costs a fortune.
  2. By design, a firewall prevents your network from accessing the Internet in a convenient way. If you have some extremely important information or a paranoid boss, maybe you will have to choose this way; otherwise, let your users enjoy the Internet without restrictions.
Fortunately, the nature of Windows networking over TCP/IP makes it very easy to isolate your network. Windows NT uses TCP/IP not in the natural way but by "tunneling", which means that all traffic goes through two fixed ports; so if you own a router, just enable packet filtering. By disabling all traffic on UDP port 138 and TCP port 139 you will prevent all NetBIOS traffic between your net and the Internet, effectively isolating your site from the rest of the net. NetBIOS is the protocol that Windows uses for all network communications, both RPC and file sharing functions.
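This port blocking can be expressed directly as router packet-filter rules. Below is a sketch in Cisco IOS-style access-list syntax; the ACL number and interface name are examples only, so adapt them to your router. RFCs 1001 and 1002 also assign UDP port 137 to the NetBIOS name service, so it is blocked here as well.

```
! Block NetBIOS-over-TCP/IP between the LAN and the Internet.
! ACL number (101) and interface (Serial0) are illustrative.
access-list 101 deny   udp any any eq 137
access-list 101 deny   udp any any eq 138
access-list 101 deny   tcp any any eq 139
! Let everything else through (tighten further as needed).
access-list 101 permit ip any any
!
interface Serial0
 ip access-group 101 in
```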

Packet filtering is a very important feature that you should use to block other commonly used ports too. Think about some of your users installing an ftp or telnet server, or worse, if you have UNIX stations running an insecure sendmail or telnet daemon. If somebody manages to get into your network, one of the classical things to do is to install a telnet daemon, or open an account for an existing one, in order to gain shell access. Another danger is NFS, which is second to none in the number of its security holes.

All this is very easy to prevent: block all incoming and/or outgoing traffic on these ports:

Service Name | Port/Type | Port Name | Direction

Don't forget to enable traffic to your designated ftp and mail servers. Blocking ports this way will prevent hackers from entering your network from outside by conventional means, making it many times more complicated for them to find an easy hole into your file systems. It will also prevent your local users from opening an easy back door into your net.

What next? How should you make your own servers secure? I'll discuss this later.

Windows NT Security Model in the Public Environment

Filtering your traffic should be backed up by internal security measures on the network itself. In the worst case, if your router is breached, whether through a bug in the routing software or some dirty trick with help from inside, the next layer of defense will be your internal network security.

The concepts discussed here are focused on the case where people try to hack your system from outside. These guidelines are not a comprehensive solution for internal security; they aim to build layered protection from the outside world. See the NT documentation for how to protect against internal users. You can also read Somar Software's Windows NT Security Issues, which contains general hints about tuning your Windows NT station's security policies.

NT comes with a lot of excellent built-in policies that help protect you from the outside world. A number of steps should be taken:

Security Issues and Running Network Server Programs

This section will be finished some other day...

Securing Windows 3.1X and Windows 95 stations

This section will be finished some other day...

Partially Connected Networks

This section will be finished some other day...

Recommended Reading

Douglas E. Comer. 3 Volumes of Internetworking With TCP/IP. Prentice-Hall.
I would recommend this book to everybody who is interested in technical reading about the TCP/IP Internet. Volume I is very good for both new and intermediate TCP/IP users. Volume II is for intermediate to advanced TCP/IP programmers.
Microsoft Corp. Microsoft Windows NT Resource Kit, 5 Volumes. Microsoft Press.
These five books, plus a CD-ROM with utilities and online documentation, are sold at the cost of the paper they are printed on. This is a required reference for every Windows NT administrator and programmer.

Last modified 2:30PM 6/17/96. Created by John Neystadt.