I also want to thank all the people on the NTMail-Discuss and Web-ServerNT mailing lists who, through their questions and answers, passively contributed to the content of this document.
In order to create a convenient forum for discussing Windows NT as an Internet platform, I run the Windows NT in Internet Environment mailing list. This list is dedicated strictly to administering Windows NT in TCP/IP networks.
Well, what makes up a site?
Windows NT comes with a built-in TCP/IP stack that functions very well, though there are alternative TCP/IP stacks on the market. Microsoft also provides TCP/IP stacks for Windows for Workgroups and Windows 95.
NT has a unique ability to backbone (operate natively) with multiple protocols, including TCP/IP, compared to Novell NetWare, which can run only IPX. That means you can have TCP/IP as the only protocol in the network if you don't have to connect to Novell servers. Another alternative is to use IPX or NetBEUI internally on your LAN. Both methods have their pros and cons. Using IPX for internal communications is safer: you open fewer points for break-ins from the Internet. The major con is that each additional stack uses a lot of memory on every workstation and also degrades performance. The alternative is to use TCP/IP as the only protocol on all stations in the network. This option is usually the best because a single stack uses less memory. TCP/IP is a very robust protocol, and Microsoft put a lot of effort into their implementation, because they use TCP/IP on their own internal networks.
One point that a lot of people neglect is tuning the SWITCH.INF file, which contains scripts for automatic dialing. Don't give up: I've managed to make it work, and you can too. After all, it was written only by Microsoft programmers. Think what could have happened if Gates had incorporated his famous BASIC engine inside.
This link contains an NT Internet FAQ. It's a little bit outdated but will help you set up the Windows NT RAS link.
You will not be able to connect your entire network to the Internet unless your ISP (Internet Service Provider) has configured this function on its side. A lot of people ask what they are doing wrong, and it turns out the problem is at their ISP.
If you want to connect at higher speeds or have more control over your connection, you will need a dedicated router such as those made by Cisco Systems. You can also benefit from a router in other ways by using it for your internal network. The simplest Cisco router costs about $2000, so it is a relatively expensive solution.
A router will give you a lot of options for packet filtering (security), line load control (backup dial lines in case of failures), and a few other features that will contribute to the reliability of your connection. The most important is packet filtering: by disabling all traffic on UDP ports 137-138 and TCP port 139 you will prevent all NetBIOS traffic between your net and the Internet, effectively isolating your site from intruders.
A router also maintains routing tables that allow you to build more sophisticated setups than a single gateway to a single Internet provider.
There are FrameRelay cards available that allow you to connect a wide serial line directly to a computer; however, the lack of full control over the routing process makes this solution less than ideal. The main benefit is that it can be much cheaper than a hardware router. Another benefit of a hardware router is that routers are generally much more difficult to break into than computers.
Microsoft released an add-on for NT called Multi-Protocol Router that allows you to build and maintain routing tables on NT hosts; it supports both the RIP protocol and IPX routing. However, it does not support packet filtering.
A comparison chart among different routing solutions by power/customization level, speed, security and price:
Solution | Power/Customization | Speed | Price | Security |
---|---|---|---|---|
Modem with RAS | None | Slow | Cheap | Low/Less Important (1) |
Hardware Router | High | Fast | Expensive | High (2) |
FrameRelay Card | None | Fast | Medium | Low |
(1) Usually networks using slow modem links are not popular sites on the Internet and do not contain a large volume of resources. Those networks are also less exposed to the public, however, which reduces the security risk.
(2) Routers must be configured properly in order to function as security guards.
As you know, a group of subnetted TCP/IP networks shares a number of parameters that must be the same across all hosts on the LAN to enable proper communication. Each host should have at least:
If you administer a network with tens or even hundreds of hosts, each of them must be configured consistently to allow even basic communication with other hosts. For this reason, the traditional nightmare of every network administrator has been the need to change something like the network number, which generally happens in one of these cases: you move to a new location or ISP, your ISP acquires a new CIDR block of addresses, or your network outgrows its original address range.
So it is clear that everyone would benefit from centralizing the registration of these settings and the allocation of IP addresses. The Internet has a number of protocols for this purpose, such as BOOTP, but only DHCP really eases this task.
The administration of a DHCP server is very simple: just install a DHCP server on one of the NT computers on your network, specify the global settings that will be shared among all hosts, and define the IP address allocation scheme. Then all you need to do on the host computers is tell them to configure themselves from the DHCP server. The simplicity of DHCP is amazing. I once needed to change the address of a network with about 30 Windows NT and Windows for Workgroups computers. It took me about 4 minutes to tell everyone to shut their computers down, change the network number on the server, restart the server, and tell everyone to turn their computers back on.
Nobody believed this was possible. Furthermore, when installing new routers or changing DNS setups, I just changed the DHCP settings and all the hosts reconfigured themselves entirely on their own.
You may say that all this is very nice, but what about computers that require fixed IP addresses? Various servers - DNS, SMTP relays, and even regular computers that you want to have fixed names for. There is no dynamic name resolution; DNS cannot help (this has been discussed for a long time, but without a solution yet), and computers such as name servers require fixed addresses anyway. The solution for this is "reservations": you preallocate IP addresses on the DHCP server for specific computers, based on the hardware address of their network interface (i.e., the Ethernet MAC address). This allows you to give some or all of your computers fixed IP addresses. The configuration becomes less flexible and requires more time to reconfigure, but it is still better than defining IP addresses and configurations on each host - all addresses are configured and maintained on a single computer.
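To make the reservation idea concrete, here is a minimal sketch of the allocation logic only (not the DHCP wire protocol); the MAC addresses and the address range are invented for the example:

```python
# Sketch of DHCP-style address allocation with reservations.
# The MAC addresses and the address pool below are illustrative only.

RESERVATIONS = {
    "00:40:05:1a:2b:3c": "192.168.1.10",   # DNS server: fixed address
    "00:40:05:4d:5e:6f": "192.168.1.11",   # SMTP relay: fixed address
}

# Dynamic pool for ordinary workstations.
POOL = ["192.168.1.%d" % n for n in range(100, 200)]
leases = {}  # MAC -> IP currently handed out

def allocate(mac):
    """Return the reserved IP if one exists, otherwise lease from the pool."""
    if mac in RESERVATIONS:
        return RESERVATIONS[mac]
    if mac in leases:                 # renew the existing lease
        return leases[mac]
    ip = POOL.pop(0)                  # hand out the next free address
    leases[mac] = ip
    return ip
```

A reserved host like the DNS server always gets the same address, while unknown workstations receive consecutive addresses from the dynamic pool; everything is maintained in one place.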
I recommend that you read more about DHCP servers in the Windows NT Resource Kit books.
If you are interested in getting a copy of a free BIND port to NT, email access@drcoffsite.com for info on getting onto an ftp site to download the files ntbind49326.zip (source) or ntdns49326bin.zip (binaries only). Note: these zip files contain long file names, and you will need an unzip utility that supports them (e.g., Info-ZIP, or download the unzip.exe available on the same ftp site).
Here are links to 2 documents that should answer all your specific BIND questions.
There are alternative implementations of the DNS system besides BIND, but they usually either do not conform to all standards or give you only a subset of BIND's functionality. Microsoft is working on a DNS implementation for NT that will incorporate dynamic (WINS) address resolution and other NT-specific features, but it has already been in beta for almost a year, and it seems they have put it on hold. You can try downloading the beta, but speaking as someone who has tested it, I would not recommend using it due to its instability and protocol non-compliance.
If you use only NetBIOS networking - file operations or client/server applications written with NetBIOS in mind - you will never face the problem of name resolution for dynamic addresses. However, with TCP/IP-aware programs such as Netscape, ORACLE SQL*Net (with TCP/IP transport), NFS for NT, or even simple TCP/IP utilities such as TRACERT (traceroute in UNIX), you do need DNS, and this becomes a problem. Microsoft is working on a special version of DNS for NT that would support dynamic resolution through WINS; they say it will be part of NT 4.0, but there is no complete solution for now.
The only thing you can do is have your important computers use fixed IP addresses via DHCP reservations. Then you can register these computers in the DNS. This is neither an ideal nor a convenient solution, but it is the only thing you can do for now.
Traditionally, Internet Service Providers have used terminal servers for this purpose; for example, one model of Cisco routers can be used this way. Another company in this market is Shiva, whose name is associated with remote access. Like other companies, Microsoft needed dial-in access for its workers; currently their headquarters has a dial-in setup with more than 50 lines. Instead of using an existing standard terminal server, Microsoft implemented one for NT. This package is called Remote Access and comes with every NT Server and Workstation. It contains 2 parts: the one already mentioned is used for dial-out, and the other, called Remote Access Server, serves dial-in. The version that comes with Workstation is limited to one dial-in port, while the Server version allows up to 255 dial-in ports on one station. A regular PC board will not give you more than 4 COM ports, so without a multi-port card you will not be able to build a server with a larger number of lines. One of the manufacturers making multi-port cards is Digi International. The DigiBoard cards work excellently with Windows NT.
A few months ago PC Magazine Labs tested remote access products and rated NT Remote Access the best tool on the market in this area. NT Remote Access Server supports all standard protocols, including TCP/IP, IPX, and NetBEUI over PPP, and all standard authentication protocols, including PAP, SPAP, and CHAP. Remote Access Server allows both dynamic and static IP addresses and can work either with a DHCP server or with a static pool of addresses. Note, however, that if you do not use a DHCP server, clients will not be able to automatically acquire other TCP/IP parameters such as DNS and WINS addresses. Dialing clients can either be allowed to access the whole network (and, through it, the Internet) or be limited to the server itself. One of the handiest features of RAS is its callback feature. RAS has the option either to use the user-initiated connection or to call the client back, at a phone number given by the client or at one preconfigured by the administrator. The latter is an excellent security solution: a server that only connects to clients by calling them at a predefined phone number is virtually unbreakable.
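The administrator-preset callback policy can be sketched as a simple decision; this models only the choice the server makes, not the telephony, and the user names and phone numbers are invented:

```python
# Sketch of RAS-style callback policy: for listed users the server
# ignores whatever number the caller supplies and dials a number
# fixed in advance by the administrator.

PRESET_CALLBACK = {
    "alice": "+1-206-555-0101",    # hypothetical users and numbers
    "bob":   "+1-206-555-0102",
}

def callback_number(user, number_given_by_caller):
    """Preset number wins; otherwise fall back to the caller-supplied one."""
    return PRESET_CALLBACK.get(user, number_given_by_caller)
```

With a preset entry, an intruder who steals a password still only causes the server to dial the legitimate user's phone line, which is why this mode is the secure one.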
RAS logs all connects and disconnects, together with session statistics, in the Event Log, which can later be used by service providers to calculate bills. Internet-Shopper provides a library of RAS software for Windows NT. All this makes Remote Access Server an excellent solution for in-house dial-in. It seems, however, that RAS still lacks a few features needed by ISPs. For example, there is no way to prevent multiple logins by the same user. Also, RAS connections can't trigger route updates, which is important if you wish to connect LANs via RAS.
There are a lot of different mail scenarios that could be created. Which one you should use depends on the number of your users and offices, on the connectivity of your site to the Internet, and on a lot of other factors.
Basically, Internet mail has one major minus compared to corporate mail systems such as Microsoft Mail or Novell GroupWise: it lacks integrated directory services. Just try to get all your marketing people to understand and remember that President Bill Clinton is billc@whitehouse.gov. There is no way they will remember. When we are talking about tens or even hundreds of people, the problem grows. This is why there are dozens of mail systems that provide a much better user interface than Internet mail. If your company uses some corporate mail system, then in order to connect to the Internet you will need a mail gateway between the SMTP world and your mail system. Almost all mail systems provide these gateways. On the other hand, you may want to use only Internet mail, since it is much cheaper than any other mail system.
A few months ago PC Magazine ran some tests, as they always do, and found that of 1000 mails they sent, about 200 did not reach the intended recipients. That led them to claim that Internet mail does not work well. That is not true; Internet mail is very reliable, much more reliable than most other mail systems. The point is that you need to know how to configure it correctly. My personal statistics say that 50-70% of Internet sites have incorrectly configured mail servers, which, in case of various failures, makes about 10%-20% of addresses unreliable. From my own experience: I run the Windows NT on the Internet mailing list, and as it grew it showed me that about 20% of subscriber addresses were invalid. Also, when I added an option to reconfirm subscriptions, by requiring an additional mail to be sent when subscribing, I saw in the logs that about 40% of people failed to subscribe, either because of a misconfigured mail system or because of illiteracy.
There are a lot of domains on the Internet that have only one name server, or, more frequently, a few name servers at the same location. That is bad. One thing RFC 920 says (note that this is a requirement; you don't drive your car without a license, right?) is that every domain has to have at least 2 name servers with no common point of failure. It is extremely important to have a secondary server somewhere on the other side of the Internet that does not sit on the same electricity line as you and has a separate communication line to the Internet. What happens if you don't have such a server? The moment your network or electricity fails, or you just have to take everything down for an urgent administrative reason, all lookups for your domain will fail, and most of the mail sent to you at that moment will be returned to the sender. It is hard to believe, but this simple point, which has been explained hundreds of times, is so hard to understand that at least 20-30% of domains are configured incorrectly.
Pay attention that one secondary name server at your Internet Service Provider is not enough. I bet you share the same communication line with them, which means that the moment their line fails, both they and you are disconnected.
Now to Mail eXchanger (MX) records. Even when your name servers are configured correctly, you can still have a lot of problems. It's funny, but the main problems show up because of UNIX and almost never show up on NT. It is very important to configure MX records for your host. Otherwise, every time e-mail is sent to a host with no MX record, it depends on the other party how it will be handled.
What is an MX record? In DNS you specify a list of machines, by precedence, that will handle mail for your site. I would recommend putting only 2 machines there: one that should receive all mail for you, and a second with the best Internet connectivity and uptime you can find. It is preferable that this be some large mail relay at your ISP or university. What is the concept of a mail relay? Regular hosts are turned on and connected to the Internet only part of the time, sometimes just a few hours. Even a large Internet mail server at your office may be on the air only part of the time, unless you have a very good reason otherwise. Now, when one machine that is up only part of the time sends mail to another machine that is also up only part of the time, there is a lot of time when only one of them is up. Also, a lot of machines are up but have their Internet links down. The solution is to designate some machine you know you can trust, both administratively and in terms of staying up: the moment the mail gets to it, your mail is under their control, and they can read it, delay it, or throw it away. So pick this machine well - something like a name server of your country, or a large ISP.
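The precedence logic can be sketched in a few lines; the host names are hypothetical, and a real mailer of course also queues and retries over days rather than giving up:

```python
# Sketch of how a sending mailer picks a target from MX records.
# Lower preference value = try first; the relay is the backup.
# Host names below are invented for the example.

mx_records = [
    (10, "mail.example.com"),      # your own mail host
    (20, "relay.big-isp.net"),     # well-connected backup relay
]

def pick_target(records, is_up):
    """Try MX hosts in order of preference; is_up(host) says if it answers."""
    for pref, host in sorted(records):
        if is_up(host):
            return host
    return None  # all MX hosts down: queue the message and retry later
```

So when your own host is down, mail flows to the relay, which holds it until the primary comes back - which is exactly why the relay's uptime matters so much.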
In the UNIX world, usually each machine runs Sendmail but does not have MX records configured for it. What happens is that when you log in on that machine and send mail, the mail goes out with the name of the host prepended, like user@host.domain. Then, when somebody replies, the mail is sent to the unconfigured machine, and if that machine is down for a few days, or has its Sendmail down, your mail will be lost.
System Administrators:
Please configure your users to have site mail addresses, not host ones; that will simplify life
for all of us.
You have a few options when deciding what e-mail system you prefer. There are 3 levels of solution to choose among:
Level | Required Connectivity | Delivery Speed | Control | Security | Price |
---|---|---|---|---|---|
1 | Partial | Few hours/days | Very small | Small | Cheap |
2 | Partial/Good | Few minutes/hours | Medium/Good | Small/Medium | Cheap/Medium |
3 | Very Good | Few minutes | Good/Excellent | Medium/Good | Medium/Expensive |
So what you should decide on is the user agent part and the delivery/receiving module. The user agent is generally chosen by the personal taste of the user. Many people like Eudora; Microsoft Mail and Microsoft Exchange are also popular. There are tens of mailers on the market. One important approach, as I've already mentioned, is to integrate Internet mail with a corporate mail system. In this case you would use your corporate reader, like MS Mail or GroupWise, to read all your mail, both regular and Internet.
The important concept to understand is the delivery module, which is the most important part in the Internet mail world. Once a message has gone from you to your mailer, and from it to your mail server, the message must be delivered to the final recipient. This is an important task that requires a direct Internet connection and well-configured software. In UNIX this job is done by a program called Sendmail. Generally, you need to maintain your own mail delivery module, or SMTP relay, when you have a serious mail load and want to control the mail flow yourself.
Delivery should be handled on the ISP side until you have a good persistent connection. Also, SMTP relay management requires a very good understanding of SMTP concepts, so don't transfer mail delivery to your side until you feel very sure about how it should function. There are a few excellent SMTP mail relays for NT. One that is also used on UNIX systems is Post.Office by Software.com Inc. A lot of NT'ers like NTMail, developed by Brian Dorricot at Net-Shopper. These 2 products provide you with virtually every feature you could ever dream of. MetaInfo Inc. has ported UNIX Sendmail to NT, but I haven't heard of anybody using it. EMWAC is developing IMS (Internet Mail Services), which is as simple as their HTTPS but seems to work. IRISoft has released Mi'Mail NT Server, which claims to be very full-featured; I haven't checked it personally, however, and I'd be glad to get feedback from someone who has.
There is a product called UUPC for NT that allows you to connect in the older UUCP style. This should not be needed unless you have already deployed applications based on UUCP or your ISP requires you to use it.
So if you decide to go with a POP client like Eudora or Exchange, just get one of the above packages and all the software you need is there. But what about corporate mail?
The standard approach to connecting 2 different mail systems is a mail gateway. In the case we are discussing, a mail gateway is a dumb piece of software that is connected to both Internet and corporate mail and relays messages between the two. All corporate mail systems have SMTP gateways that will let you connect your Lotus Notes, GroupWise, or MS Mail to the Internet. There are also a few third-party products that, as usual, do the job better than the original; check out the very nice Post Union SMTP Mail Gateway, a unique product that is an all-in-one SMTP mail relay and gateway between all major corporate mail systems.
In conclusion, I want to tell a story that happened to me while I was working on this document.
Some good guy sent e-mail to a mailing list and attached an executable file of 2.7MB to it. This mail was to be distributed to 300 people... Well, it wasn't, exactly: after the first 100 (100 users * 2.7 MB = 270MB) the disk space on my servers ran out and mail delivery for the entire office stopped. I am also afraid to think about the poor 100 subscribers who had to retrieve this huge mail over their slow modem links, wondering what they should do with that useful file they had never ordered.
An old, wise Russian saying tells you to think 7 times before acting. Applied to the Internet, I would like to say again that mail configuration is one of the most difficult issues on the Internet, so please, please, please read more material on it. Don't build a site that will make other people curse you.
However, this server has a few major problems that make it unusable for small sites. It lacks detailed logging: the logging facility of the Microsoft FTP Server is optimized for high loads and automatic scripts but lacks detail.
More importantly, its security features are not good at all. Microsoft requires you to run the FTP Server under the SYSTEM account, thus exposing your NT machine to intruders and making you rely on Microsoft programmers for your security. Well, we know that most Microsoft programs do not contain any bugs and are very robust! (Try to run M$'s best-selling Windows for more than a day: if you don't run anything besides the screen saver, you can hope your machine won't crash, and even that depends on the screen saver!)
Another security problem is that the FTP Server uses the same user database as NT itself, and it is not necessarily true that you want to grant users from the Internet the same permissions as in-house users. There are a few alternatives to the Microsoft FTP Server on the market; for instance, try WFtpD.
After the introduction of Microsoft's Internet strategy and the release of IIS, things changed slightly. IIS provides a more powerful security concept, so it will be easier for you to set up an FTP server using it.
I'll mention here only a few that have something special.
There is a very good and fast NNTP server for NT called NNS, written by Jeck Coffler and recently purchased by NetManage. NNS is not a port of UNIX's INN, but a stand-alone server conforming to NNTP. NetManage is going to release a commercial version of NNS, but the free one is also available from them. If you would like to contact NetManage regarding this product, feel free to send e-mail to news_server@netmanage.com.
Another NNTP server for Windows NT, the very widespread DNEWS, is made by NetWin; it supports all the security features of NNTP that were not originally supported by the freeware version of NNS.
This networking is based on the SMB protocol and interfaced via NetBIOS. Once, NetBIOS ran only on top of its own protocol, called NetBEUI. This protocol was designed without many modern requirements in mind and has two major limitations. First, NetBEUI is non-routable, making it undesirable for large networks. A second limitation, which is not well publicized by Microsoft, is that NetBEUI has problems in its basic implementation and does not work well on networks with a high load. Thus it is good only for very small networks with up to 10 computers and a small load.
In order to move away from NetBEUI, Microsoft took a very clever step: the networking group decided that NetBIOS could be encapsulated, or as it's sometimes called, tunneled, in another protocol like TCP/IP or IPX. Thus Windows can work with only TCP/IP, without NetBEUI. This is a very handy feature that is not present in systems like Apple Macintoshes, which are limited to AppleTalk, or Novell, which requires IPX.
What tunneling means is that Windows NT does not take advantage of TCP/IP-specific features such as distributed name servers and the service port concept. What it does use is IP routing and reliable TCP delivery, which is most important.
The encapsulation, called NBT (NetBIOS over TCP/IP), is defined in RFCs 1001 and 1002. I highly recommend reviewing these papers; they explain a lot of concepts that are not covered in the Windows or Windows NT documentation.
The main idea is that all communications go through a few fixed ports: 137 and 138 for UDP and 139 for TCP. That means that, in comparison with UNIX, where each service like NFS or NTP requires a separate set of ports, all NBT communications are limited to these few ports, making them very easy to monitor or block.
The NetBIOS name space differs from the Internet DNS name space, and Windows networking does not require DNS at all. Instead, Windows internetworking uses its own name server, called NBNS (NetBIOS Name Server).
The Windows NT implementation of NBNS is called WINS - Windows Internet Name Service. The servers were called Rhino servers (Rhino was Microsoft's internal code name for the NBT implementation project). The name was kept as the name of one of Microsoft's Internet servers for NT, rhino.microsoft.com, recently renamed internet.microsoft.com.
The original NetBEUI relied very heavily on broadcasts, but in routed environments this doesn't work, and broadcasting also wasted bandwidth on the local subnets. The old solution was the LMHOSTS file, which required every computer to have a manually maintained name database. This was fixed by WINS servers, which perform the task automatically. Using WINS is as simple as installing the package and specifying the WINS server address to hosts, either on the hosts themselves or at the DHCP server. The immediate benefit of a WINS server is that computers located on different sides of a router can see each other. The second benefit is avoiding broadcasts.
In order to completely eliminate name resolution and name registration via broadcasts, you should change the NetBIOS node type from h-node to p-node.
One additional feature of WINS is that it allows dynamic addressing. When a host acquires a new address, via DHCP or from the system administrator, it automatically registers itself with its WINS server. This approach is in contrast to the one used in TCP/IP DNS, where new hosts are registered manually by the administrator.
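The contrast with manually maintained DNS can be sketched as a toy model of the registration flow (this is not the WINS wire protocol, just the idea):

```python
# Toy model of WINS-style dynamic name registration: when a host
# boots (or gets a new DHCP lease) it registers its own name itself,
# so the administrator never edits the name database by hand.

wins_db = {}  # NetBIOS name -> current IP address

def register(name, ip):
    """Called by the host itself after it acquires an address."""
    wins_db[name.upper()] = ip

def resolve(name):
    """Called by a peer that wants to reach the host by name."""
    return wins_db.get(name.upper())
```

After a host re-registers with a new DHCP address, lookups immediately return the new one, with no manual step; in DNS the administrator would have to edit the zone by hand.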
It is recommended to read Origin of Windows Networking and WINS Servers in order to best understand this chapter.
So building uncompromising security involves 2 parts.
If your site does not hold too many attractions for the masses, which in 95% of cases is true, most likely you will want to secure your network against ex-workers or competitors. This means you will want to prevent them from accessing network resources from outside, when they are located at home or some other place. Legally, it is much harder to prove that damage was done by some specific person if it originated outside your organization: people can claim that somebody impersonated them or used their home computer without their permission.
In short, this means you have to isolate your network in such a way that nobody, including you, can access your network resources from outside, even knowing your network infrastructure, passwords, server names, login names, or any other important information.
The classical approach to this is a firewall, and indeed this is the ideal solution for sysadmins who do not understand much about the systems they manage. By declaring "disable everything, then enable what is needed", the firewall approach will prevent any unwanted access. However, firewalls have two major problems.
Packet filtering is a very important feature that you should use to block other commonly used ports too. Think about some of your users installing an ftp or telnet server, or worse, if you have UNIX stations running an insecure sendmail or telnet daemon. If somebody manages to get into your network, one of the classical things to do is install a telnet daemon, or open an account on an existing one, in order to gain shell access. Another danger is NFS, which is second to none in the number of its security holes.
All this is very easy to prevent: block all incoming and/or outgoing traffic on these ports:
Service Name | Port/Type | Port Name | Direction |
---|---|---|---|
FTP | 20/tcp | ftp-data | incoming |
FTP | 21/tcp | ftp | incoming |
Telnet | 23/tcp | telnet | incoming |
Mail | 25/tcp | smtp | incoming |
NFS | 111/tcp | portmapper | both |
NFS | 111/udp | portmapper | both |
Administration | 161/udp | snmp | both |
Administration | 162/udp | snmp-trap | both |
Don't forget to enable traffic to your designated ftp and mail servers. Blocking these ports will prevent hackers from entering your network from outside in the conventional ways, making it tens of times more complicated for them to find an easy hole into your file systems. This will also prevent your local users from opening an easy back door into your net.
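The policy in the table above can be written down as a small rule set. This sketch shows only the matching logic a router applies; the "allow" exception for a designated mail server uses an invented address, and a real router config would of course also match source addresses and interfaces:

```python
# Sketch of the filtering policy from the table above.
# ALLOW exceptions are checked first, so designated servers stay reachable.
# The mail server address below is invented for the example.

ALLOW = [
    ("192.168.1.11", 25, "tcp"),   # designated SMTP host (hypothetical)
]

BLOCK = [
    (20, "tcp"), (21, "tcp"),      # ftp-data, ftp
    (23, "tcp"),                   # telnet
    (25, "tcp"),                   # smtp
    (111, "tcp"), (111, "udp"),    # portmapper (NFS)
    (161, "udp"), (162, "udp"),    # snmp, snmp-trap
]

def permit(dst_ip, dst_port, proto):
    """Return True if an incoming packet should be let through."""
    if (dst_ip, dst_port, proto) in ALLOW:
        return True
    return (dst_port, proto) not in BLOCK
```

So a telnet attempt from outside is dropped, SMTP reaches only the one host you chose for it, and ordinary traffic such as HTTP passes untouched.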
What is next? How should you make your own servers secure? I'll discuss this later.
The concepts discussed here are focused on the case where people try to hack your system from outside. These guidelines do not contain a comprehensive solution for an internal security system; they aim to build layered protection against the outside world. Look at the NT documentation for how to protect against internal users. You can also read Somar Software's Windows NT Security Issues, which contains general hints about tuning your Windows NT station's security policies.
NT comes with a lot of excellent built-in policies that help greatly with this protection from the outside world. A number of steps should be taken: