Saturday, January 29, 2011

Cannot receive local mail with postfix

I'm trying to send mail to root@localhost but no matter what I try, it doesn't work. I always get DNS lookup errors:

Dec 20 10:08:56 HD-T2597CL sendmail[4408]: nBKF8uEu004408: from=root@Server1, size=451, class=0, nrcpts=1, msgid=<1261321735.4404@Server1>, relay=root@localhost
Dec 20 10:08:56 HD-T2597CL postfix/smtpd[4409]: connect from Server1[127.0.0.1]
Dec 20 10:08:56 HD-T2597CL postfix/smtpd[4409]: 075FE18080C4: client=Server1[127.0.0.1]
Dec 20 10:08:56 HD-T2597CL postfix/cleanup[4412]: 075FE18080C4: message-id=<1261321735.4404@Server1>
Dec 20 10:08:56 HD-T2597CL postfix/qmgr[3791]: 075FE18080C4: from=<root@localhost.localdomain>, size=495, nrcpt=1 (queue active)
Dec 20 10:08:56 HD-T2597CL sendmail[4408]: nBKF8uEu004408: to=root@localhost, ctladdr=root@Server1 (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30451, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (Ok: queued as 075FE18080C4)
Dec 20 10:08:56 HD-T2597CL postfix/smtpd[4409]: disconnect from Server1[127.0.0.1]
Dec 20 10:08:56 HD-T2597CL postfix/smtp[4413]: 075FE18080C4: to=<root@localhost.localdomain>, relay=none, delay=0.05, delays=0.05/0/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=localhost.localdomain type=MX: Host not found, try again)
Dec 20 13:57:55 HD-T2597CL sendmail[8885]: nBKIvtng008885: from=root@Server1, size=453, class=0, nrcpts=1, msgid=<200912201857.nBKIvtng008885@localhost.localdomain>, relay=root@localhost
Dec 20 13:57:55 HD-T2597CL postfix/smtpd[8686]: connect from Server1[127.0.0.1]
Dec 20 13:57:55 HD-T2597CL postfix/smtpd[8686]: 97A4618080B9: client=Server1[127.0.0.1]
Dec 20 13:57:55 HD-T2597CL postfix/cleanup[8689]: 97A4618080B9: message-id=<200912201857.nBKIvtng008885@localhost.localdomain>
Dec 20 13:57:55 HD-T2597CL postfix/qmgr[8596]: 97A4618080B9: from=<root@localhost.localdomain>, size=611, nrcpt=1 (queue active)
Dec 20 13:57:55 HD-T2597CL sendmail[8885]: nBKIvtng008885: to=chris@localhost, ctladdr=root@Server1(0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30453, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (Ok: queued as 97A4618080B9)
Dec 20 13:57:55 HD-T2597CL postfix/smtp[8690]: 97A4618080B9: to=<chris@localhost.localdomain>, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=localhost.localdomain type=MX: Host not found, try again)
Dec 20 13:57:55 HD-T2597CL postfix/smtpd[8686]: disconnect from Server1[127.0.0.1]

My config is set to:

mydomain = domain.org
myhostname = mail.domain.org
myorigin = domain.org

Options using the default settings: inet_interfaces, mydestination

my /etc/hosts file:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   Server1 localhost.localdomain localhost
::1  localhost6.localdomain6 localhost6
209.x.x.x   Server1

I have no idea what to do now...

  • Check that localhost is listed under mydestination.

    This is the default, but you probably have something else:

    mydestination = $myhostname, localhost.$mydomain, localhost

    Also check that you don't have relayhost set and that /etc/postfix/transport is empty.

    Check also that root is not aliased to anything in /etc/aliases.
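
    A quick way to verify all of that from a shell (a sketch; postconf and newaliases ship with any standard Postfix install):

    postconf mydestination relayhost
    grep '^root' /etc/aliases    # is root aliased to anything?
    newaliases                   # rebuild the alias db after any edits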

    EarthMind : I've tried your suggestion but it's still not working and before that it was set to default settings. I've tried sending mail to root@localhost, root@Server1 and different users.
    EarthMind : Oh, and my mail server does not use the 209.x.x.x IP address but another one. Does this IP need to be on the hosts file too?
  • IMHO Postfix bails out because it is confused about domain.org. Please run

    hostname -f

    and check that the result:

    1. is sane (it should be Server1.domain.org; if not, tidy up /etc/hosts as described below)

    2. resolves to a valid IP address of the server (which can be found in /etc/hosts)
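
    For example, on a sanely configured host you would see something like this (output is illustrative):

    $ hostname -f
    Server1.domain.org
    $ getent hosts Server1.domain.org
    209.x.x.x   Server1.domain.org Server1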

    Again, IMHO, the "localdomain" thingy is an abomination that is good for nothing but headaches, and I get rid of it / replace it with the real domain as soon as a server is put into my hands. I usually put something like this into /etc/hosts:

    127.0.0.1   localhost.domain.org localhost
    209.x.x.x   Server1.domain.org Server1
    

    Every name is there once and there is a clear distinction between the loopback and external name/address.

    From Juraj
  • After a lot of searching and testing I managed to find the solution:

    1. I switched from the OpenDNS resolvers to the ones that my webhost provides.
    2. Then I added mydestination = $myhostname, localhost.localdomain, localhost to the Postfix configuration, as AlekSandar mentioned.

    That made it work.
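
    In main.cf terms, step 2 boils down to the following line (a sketch; Postfix needs a reload afterwards with postfix reload):

    mydestination = $myhostname, localhost.localdomain, localhost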

    From EarthMind

LAN Between Buildings

Right now I have two buildings across the street from one another that both have typical consumer ADSL connections (something like 12 Mbit/s down / 1 Mbit/s up). When I need to access resources on the other network, I VPN into the other building and do my work. The problem is that when I'm transferring large files, it's a very slow connection. And sometimes I'm remotely working with both of those networks, so simply walking between buildings with a hard drive is out of the question.

What are some best practices to solve this situation? I've considered running a fiber link between the buildings, but is that a viable option and what is there to consider when doing that (I imagine a permit from the city would be needed to dig up the street)? Wireless seems like the obvious choice, but is it reliable, and would I get a significant speed boost?

Thanks!

  • I've personally made this device (Build Wifi Hi-gain Antenna) and have two buildings 2 km apart able to run a 44 Mbit/s Wi-Fi connection without any problems (802.11g).

    The whole cost ended up being about $70 excluding time, but it worked, and worked well, for over 2 years until I needed more bandwidth and ran SC fibre between them.

    mrduclaw : This looks decidedly delicious. Thank you! Is there anywhere I can just buy one of these though? I kind of want it to "just work" since this is for a corporate environment, and to be able to call someone's tech support when it doesn't.
    Catherine MacInnes : I have seen them and therefore know that they exist, but I don't know where to buy one.
    Farseeker : Directional WiFi = wonderful, if you have line of sight.
  • Note: if you use an electrical link, rather than a radio or optical one (e.g. stringing a Cat5 cable), then you need to ensure you have electrical isolation in place on at least one end, and surge suppression on both.

    Two buildings, even next to each other, are likely to have slightly different earth potentials; this can lead to the network wire carrying current that you (and the electronics at either end) definitely don't want.

    Surge suppression covers cases like the wire, or one building, taking a local surge (e.g. a lightning hit).

    Generally, "wired" connections between buildings, even close buildings, are optical to avoid all this complexity.

    Finally, don't forget that you will need to consider a backup in case of a fault, especially as the business grows to depend on the link.

    mrduclaw : Thank you, I have read about this concern and that's why I left electrical links off my list ( I just assumed everyone was either optical or wireless because of this ). Do you have any suggestions as to how to go about actually setting up this optical link, since I think that's how we'd probably like to proceed? I've never run any fiber optic links before, especially across a city street.
    Richard : Nor have I :-). The simplest is probably two ethernet routers with matching optical transceivers and optical fibre (there are plenty of options, so they need to match). NB. you will need to find out what local permissions you need (planning, access, ...) to string the fibre and supporting cable.
    From Richard
  • if you can - go fiber. this investment will scale nicely in the next decade or two and provide much more capacity than you need today.

    if - because of costs / work needed - it's out of the question, then wifi should do the trick.

    if you would like reasonable performance - i suggest you look at access points compatible with 802.11a. i'm using outdoor osbridge units on some wireless bridges - they work very well. some benchmarks [ you'll have to use google's translate from polish to english ] can be found here.

    or - even better - some mikrotik platform with two such cards at each end running nstreme. this should provide you a usable full-duplex link at a pessimistic speed of 50-60Mbit/s even with the smallest packet sizes.

    setup should be very easy with two regular access points, and slightly more challenging with mikrotik, but you'll get more control over things and better performance.

    if possible - avoid 802.11b/g - those will give you lower performance.

    mrduclaw : This looks great! Thanks! Do you have any links for setting up a fiber link, as well? I've never played with fiber.
    pQd : @mrduclaw - no, not really. it depends a bit on the distance - but if it's your first time, i suggest you get an external company to finish the work. you can dig a trench, or even put in HDPE pipe, but let the specialists handle attaching the fiber to the patch panels on both ends. you will also need media converters or switches with SFP ports. all-in-all fiber will be more expensive for sure, but it pays back.. once set up [ provided it's physically secure ] it'll work fine for a long time.
    Matt Simmons : When I was looking at running fiber a km, I called some local fiber contractors who were familiar with the building and office park. In your case, maybe you could talk to the business class sales office at your telco, explain the situation, and see what they can do for you. You might be surprised, unless it's Verizon. Then you'd still be surprised, just in a bad way.
    From pQd
  • I have a Customer with two buildings across the street from each other in a similar situation. When I started working with them in 2003 we put in a fixed wireless system (802.11b using Cisco Aironet gear at the time). It worked well enough, but we had outages when an AP failed, and a loss of performance when the neighbors all started putting up wireless Ethernet and other crud in the 2.4GHz ISM band (yes-- we were using highly directional Yagi antennas pointed directly at each other, too). The Customer "upgraded" the system to a set of Proxim 802.11g APs in 2005 and saw very little improvement in performance.

    When the Customer started running VoIP over the link it became very clear that the link wasn't very reliable and had fairly unpredictable latency. This traffic, combined with the increasing size of the corpus of files that we were backing up over the "air" each night from a file server located in the "across the street" location, pushed us to recommend fiber.

    We had the local cable television monopoly (Time Warner) run the fiber. For the Customer, this meant that they didn't do anything at all with permits, digging, hiring contractors, etc. The cable monopoly got the fiber run, put Ethernet switches at both ends, and told us "plug your network in to this port" on each end. That gave us a 100Mb/sec, rock-solid reliable connection between the buildings.

    For the Customer, there is a recurring month-to-month expense. Initially I was opposed to this; however, the payback period for the Customer paying for the fiber installation themselves versus the recurring charge turned out to be roughly 3 years. The Customer's financial controller also liked the idea of a month-to-month expense versus a capital expenditure. (You'd have to ask him why... this is Server Fault, not Obscure United States Federal Tax Code Fault...)

    Every fixed wireless link I've had the occasion to work with (all of which were installed by other contractors except for the one I've described here) has been problematic in some way or another, as compared to fiber. Fiber, once it's in the ground / air and working, works forever (unless it gets BIFF'd or shot). You'll change the electronics on the ends from time to time, but that usually just means an increase in speed or features. The fiber itself remains the same.

    I'd recommend strongly against running the fiber yourself. You can save a lot of time by getting together with a cabling contractor who has done this kind of work before. They'll know what permits to get, and have heavy equipment available (like directional boring machines). They can tell you, too, if your buildings are close enough together to get away with using the less expensive multi-mode fiber optic cable versus single-mode (which can cover a much longer distance but is much more expensive, both for the fiber and the electronics on the end).

    Check with your local monopoly cable television provider, too. They may be able to run it for you and, depending on how long your company wants to stay in those buildings, what the month-to-month cost is compared to the cost of installing the fiber with your own contractor, and how your finance people feel about an expense versus a capitalized asset, you might find out that the monopoly provider ends up being the route to go.

    duffbeer703 : I'd vote this up 5x if I could. If you're asking a question like this, you have no business running cable across property lines. When the local gas/water/power/steam/telecom utility digs up your wire where you didn't dot the i's and cross the t's, not only will they gleefully cut it, but they will also blame your company for any issues within 50 feet of the hole.
    mrduclaw : Thank you, this is exactly what I was looking for!
  • Evan's answer is the best one for 95% of scenarios, but an edge-case solution in an area with heavy RF interference would be to use laser networking products.

    Companies like fSONA offer equipment that allows you to establish highly-reliable links up to a few miles away. These solutions work really well, but are really expensive... I think a previous employer set one of these up a few years ago to establish a network presence in a historic building quickly, because getting the permits/signoffs for cable runs was literally going to take many months, and connectivity had to be established in a small number of days.

    mrduclaw : This looks like a pretty sweet solution and I'd love to play with it. This is for a relatively small business though, so cost will weigh in heavily.
  • If the buildings are literally opposite each other, you could run a cable between the two over a catenary wire. You will likely also need assorted permits to install one of those over a public street (I imagine there must be some kind of minimum clearance requirement), but it gives you another option.

    mrduclaw : This is something I really hadn't thought about. The buildings are the tallest in town (it's a small town), and they're twins, so it seems reasonable that the city should allow us to do this without too much issue. Thanks!
  • Ask your local ISP about the fiber. They can run a dedicated fiber for you themselves, or at least help you with laying it (don't agree to rent it, of course :).

    It could be pretty cheap and can give you very good speeds. For example, at the company I was working for previously, ISP techs ran a 300m fiber between our offices in Berlin for a one-time fee of about 200 Euros (~$300) plus the price of the cable (I don't remember how much, but also not very expensive). That is even cheaper than 2 good wireless access points.

    mrduclaw : That sounds like an amazing deal. The price estimates I've found for running it myself have been astronomical in comparison; I'll be calling my ISP when they open today.
    disserman : the main idea is to call a local ISP company for your district directly. if they have something already laid, or can add one more fiber, why not? of course all other companies will quote high prices because they usually have to do the laying from scratch, obtain permissions from the building owners, etc., while the ISP could already have all of this.
    disserman : about the astronomical prices - I remember about 10 years ago we worked with a newspaper in Ukraine (ex-USSR) that had very slow Internet in its offices and decided to purchase a dedicated fiber between the cities of Lviv and Kyiv (about 470 km). well, that was expensive!
    From disserman
  • In a very similar situation I used a pair of Bridgewave 60GHz radios. They cost us $15K installed (that was three years ago, so should be cheaper now) and provided a 1GigE bridge that worked with 100% availability over two years before I left that employer. Check out their website, find a reseller/installer, and you'll be very happy. The only requirement is line of sight.

    mrduclaw : I like that it can provide a 1Gb link, I don't like that it's $15k. I'll keep them in mind for the future though, as the company grows.
    ynguldyn : Keep in mind that it was a one-time expense. Break it down to the three year amortization schedule, and $400/month will not look like such a big deal. (Consider this a hint how to sell it to the beancounters.)
    From ynguldyn
  • We have several buildings around our town that are linked together with a mix of fiber and wireless. With the fiber, just a couple of media converters connected off your switch is really all that is required.

    In regards to the wireless, I'm not suggesting this brand of product is best, but we've tried several very expensive brands before settling on Ubiquiti Networks products for our wireless needs. The NanoStations are inexpensive and run beautifully. We're using some outdoors that have a range of around 5 km, all for a price comparable to an indoor consumer 802.11 router. They are all very well constructed too. I recommend that you have a look. http://www.ubnt.com

    There are many others out there... wireless is the way to go between buildings if there is no preexisting copper or fiber cable and/or no channel to lay it in. It's extremely expensive to cut concrete and lay fresh cable, as you can well imagine.

    mrduclaw : These products actually look pretty reasonably priced. Thank you for the link!
    Matt : Amazingly, we just had a backhaul installed about 30 km away on a hill around 2000 ft high. The NanoStations are working at that distance, with clear line of sight. They are only 200 milliwatts of power, with 16 dBi antennas, not even parabolic. I'm blown away by the performance.
    From Matt

Highly random PostgreSQL database host

We're working with PostgreSQL very successfully on a moderately large project (approximately 12GB of data in our working set).

Currently we're on a 2GB RAM machine with 7200RPM disks. You can imagine the performance goes to hell quite quickly, even with clustered tables and proper indexes, optimised queries and design, etc. We spend most of our time waiting on I/O, for both read and write operations.

We're putting another 2GB of RAM in the box and a 10kRPM VelociRaptor disk, but those are just stopgap measures while we work out how to go on from here. The whole set is updated very often, so SSDs are out (too expensive, too - this project is being run by two students with no money!), and I'm kind of interested to hear if anyone else has any suggestions for cheap (<£100/mo) server/VPS solutions which would involve 12-16GB of RAM and/or snappy hard disks. Or, even better, an alternative solution to the problem. Are there any hosts who specialise in database hosting?

This is kind of an 'oh god, there has to be a better way' post, but the basic gist is: are there hosts or solutions available at this price point, if not, why not, and what are the cheap solutions to this sort of problem?

  • It's probably worth double-checking that you've tuned the server appropriately for the hardware you're on. Settings to be concerned with include shared_buffers, effective_cache_size, your checkpoint settings, and wal_buffers, and I'd also make sure your work_mem isn't set too high. There's a wiki page I helped to write which covers most of this stuff; it's a good place to start: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
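
    As a rough illustration only - the values below are placeholders for a ~4GB box, not recommendations; derive your own from the wiki page above:

    # postgresql.conf -- example starting points
    shared_buffers = 1GB           # often ~25% of RAM
    effective_cache_size = 3GB     # estimate of RAM available for caching
    checkpoint_segments = 16       # spread out checkpoint I/O
    wal_buffers = 8MB
    work_mem = 16MB                # per sort/hash, per connection -- keep modest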

    The next question might be to look at the queries that you're running to make sure the query plans aren't causing more I/O than necessary. Better queries and improved indexes might help here.

    One other thought: you didn't explain what the underlying disks actually were (and/or what you plan to do with the new disks), but it might be worth setting up some type of RAID system, or splitting your xlog files onto a separate set of disks.

    Personally I don't know of any database hosting for Postgres at that price point that would include deeper investigation of what's going on with your database; most of them would leave that to you.

    JamesHarrison : Tuning we've been working on for months now and we're not entirely convinced we can wring any more performance out of things down that route. Forgot to mention the disks; they're standard 7200 RPM IDE disks. Less than ideal. They are server grade but not SATA or 10kRPM disks. We're getting one additional 10kRPM disk which will be the new home for some of the larger tables. We'd love to set up RAID but we can't; the server was set up nearly 5 years ago without it, and there's close to 600GB of data to worry about on there. Backups are becoming interesting.
    From xzilla
  • Hosting is a no-brainer - check Hetzner. I'm using them, my friends are using them, and we are really happy with the offering.

    On the other hand - to give you some perspective - 12GB is definitely not a "moderately large" dataset. I would have a hard time classifying it even as medium.

    This is not to talk down to you - there are a lot of very important databases that are small. And a lot of big ones that are not that crucial. It all depends on how important the data and/or operations on the data are.

    As for tuning - check what xzilla said, and read about the EXPLAIN ANALYZE command and its output. It is the single most useful part of PostgreSQL.
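
    For example (the table and filter here are hypothetical - substitute one of your own slow queries):

    -- compare estimated vs. actual row counts and watch for sequential scans
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;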

    JamesHarrison : Indeed, it's a small database in the grand scheme of things, but it's certainly out of the range of smaller scale stuff. We'll have a look at hetzner, though.
    From depesz

1 Router 2 LANs?

I'm not super keen on the fancy networking terminology to describe my situation, so please help me out with that as I describe my problem in the best terms I can.

Currently, we have a network living on the 10.0.0.0/24 subnet. We just purchased an embedded device that, apparently, has its IP flashed onto it by the manufacturer ( or at least that's how the tech who installed it described it ), and its IP address is 192.168.1.200.

While I'm fully confident there's a semi-nice way to change the IP on this embedded device so that it can live on the same network as our other computers, is there a way I can just get the router to route traffic to this device and achieve the same goal? Currently the network and device are literally on the other side of the planet from me, so performing as few steps as possible to get this working would be great.

If this matters, I have a SW24 router from SYSWAN.

Thanks!

UPDATE: I should probably mention an additional constraint: I can't change the IP address of the router so that it lives on the 192.168.1.0/24 network. Or at least, I don't think I can; previously it has caused problems. The two modems both already have the IP address of 192.168.1.1, and the SW24 is load balancing our Internet between them. Previously, when I've tried to give the router an address of 192.168.1.254, it seemed to cause problems with our Internet.

Clarification: With regard to how the router should handle the traffic: I want the 192.168.1.200 host to work as though it were just another device on the 10.0.0.0/24 network. That is, from my 10.0.0.100 device, I should be able to ping 192.168.1.200 successfully, and vice versa.

Update: Here's a picture of how the network is sort-of laid out right now. The two clouds are the two modems, the 10.0.0.1 is the SYSWAN router's private IP address. I'd like for both of the 10.0.0.100 and 10.0.0.101 devices (actually there are a few more on the network, as well, but this is for simplicity of the picture) to be able to communicate with the 192.168.1.200 device. It seems like this is a job for static routes, but that's where I get confused. What kind of static route do I need to add to my router?

lolnetwork

  • Follow the SW24 Technical Manual and it will just allow you to change the IP address of the SW24... problem solved.

    Section 2.1 is specifically what you are interested in, and it clearly shows that you can have an address 10.X.X.X.

    After looking at your diagram, things get more interesting, and now it all makes a little more sense. You cannot have the 192.168.X.X subnet on both sides of the router, as this is an invalid configuration. What you need to do is break the 192.168.1.1 interface down into a much smaller subnet (say 255.255.255.252) and have the 192.168.1.200 side broken down into a smaller subnet as well.

    Configure your router with an IP address of 192.168.1.2/255.255.255.252 on the outside pointing to your load balancers, and both 10.X.X.X/255.255.255.0 and 192.168.1.201/255.255.255.252 on the inside, with your default route pointing to 192.168.1.1 for its external routing.

    This will allow any of your 10.X.X.X machines to talk to the 192.168.1.200 device via the router, and allow the 192.168.1.200 device to get out to the internet via the default routes on your router.

    mrduclaw : Thanks for the response. Updated question accordingly.
    Stephen Thompson : Then set up another interface on your current router with a subnet of 192.168.1.100/255.255.255.0 and tell your current routers to set up a default route (0.0.0.0/0) to 192.168.1.200. You have to be very careful with the terminology used, or draw a simple diagram to explain in pictures what you need to have happen to the IP packets.
    mrduclaw : Yes, this looks more like what I'm expecting to see. But how do I go about adding another interface on my router?
    mrduclaw : With regard to the IP packets: I want the `192.168.1.200` host to work as though it were just another device on the `10.0.0.0/24` network. That is, from my `10.0.0.100` device, I should be able to `ping 192.168.1.200`, and vice versa.
    Stephen Thompson : Depends on the router model. I would just set up another IP address (192.168.1.100) on the same physical ethernet port so packets coming in on (10.0.0.0/24) go into the router, and back out the same physical interface, with your router becoming nothing more than a "router on a stick".
    mrduclaw : I'm not sure what you mean by "router on a stick". And I can't seem to find a place to add an additional interface. I think I did something like what you're describing with my router running OpenWRT using virtual interfaces, but this device doesn't seem to have that option.
    Stephen Thompson : What's the router model? Then we can all help you.
    mrduclaw : It's the same router you linked to earlier, the SW24, it's also in my question.
    Stephen Thompson : The SW24 doesn't allow you to do split-brain routing tables, so you are out of luck, I'm sorry to tell you.
    mrduclaw : I can't add a static route to fix this problem?
    mrduclaw : OK, this sounds doable. I just want to clarify, the SW24 is the load balancer. So what should its LAN-side IP address be? We've already determined that I can't give it more than one, right?
  • One possible (slightly messy) solution. Put the device on its own network segment behind a router. Put the WAN side of that router on your 10.0.0.0/24 network; let's say it gets IP 10.0.0.254 for concreteness, but it could be anything. Make the LAN side of that router 192.168.1.192/26 (which will minimize its overlap with your load balancer and maybe solve some of your other problems). Now put a static route on your 10.0.0.0/24 device to route 192.168.1.200 or 192.168.1.192/26 (which will allow you to put more devices on your new network, but may cause other problems) to 10.0.0.254. That should do it.
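
    To illustrate the static route (a sketch; the exact syntax depends on your platform - on a Linux box on the 10.0.0.0/24 side it would look like):

    # send traffic for the device's /26 via the inner router at 10.0.0.254
    ip route add 192.168.1.192/26 via 10.0.0.254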

    mrduclaw : I'm not sure I completely followed what you're saying. Put which device on its own network segment behind the router? Can't I just add a static route in the router for the `10.0.0.0/24` to the `192.168.1.200` device?
    Catherine MacInnes : The 192.168.1.200 device cannot sit on the 10.0.0.0/24 network. In order to have a device that is on a different subnet, it needs to have a router. So you need to add a router between the 192.168.1.200 device and your 10.0.0.0/24 network. Then set a route to that device through that router. Your diagram above will never work because you have a device with an IP address that is not on the same logical network as the physical network it is connected to.
    mrduclaw : OK, just to clarify: so you're saying I need to add an additional router to the picture between my current one and the 192.168.1.200 box, right? And then set the static route though this new router? That makes more sense to me, thanks.
    Catherine MacInnes : Yep. That's what I'm saying.

Network Restore of OS to Non-System Drive - is it possible?

I've got an intriguing goal with an equally intriguing problem to overcome: how to restore an OS to a set of blank disks attached to a running computer. This computer is running the very OS I want to restore to the blank disks, and when the restore is complete, I want to be able to bring the new disks online as if nothing had happened.

Our current setup:

Windows Server 2003

  • Backup Exec 10d backup server with an accessible backup of the C:\ and shadow copy components of the R2 server
  • Primary DC

Windows Server 2003 R2 server

  • System partition running on software RAID 1 (read: dynamic disks) (C:\)
  • An empty RAID 1 basic, primary NTFS partition running off a hardware controller (E:\)
  • Secondary DC

What I'd like to do

Without disturbing the software RAID partition, restore a recent backup of the R2 server to the partition on the hardware RAID controller and unplug the software RAID partitions, effectively switching from software RAID to hardware RAID. Ideally, the server will boot to the new drive, which should then be the standard C:\, and life will continue as if nothing happened.

Effectively, what I'm trying to do is 'install' an OS from a backup to an empty set of disks by simply restoring a backup to the empty disks, nothing else (seems simple, doesn't it?)

A couple of concerns I have:

  • I don't trust Backup Exec 10d to do things logically, due to past experience: if I elect to restore the shadow copy components of the remote server, will it restore them to the remote server (good), or to the local Backup Exec 10d server (bad)?
  • (if the above does redirect the shadow copy components properly, then:) If I elect to redirect the backup, will the shadow copy components be redirected to the new disks as well?
  • All else failing, or because there's a simpler way, what other options do I have?
  • I don't think you're going to get what you want with the method you're proposing. Switching the type of disk controller that Windows 2003 boots from hasn't been tremendously reliable in my experience.

    Backup Exec also isn't going to write out the bootloader bits necessary to make these disks bootable when you restore onto them. You'll end up needing to boot the Windows Server CD and use the recovery console to do a "FIXBOOT" and "FIXMBR".
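
    A sketch of those recovery console steps, plus the kind of BOOT.INI entry a hardware-controller boot volume may need (the ARC path below is illustrative, not taken from this system):

    fixboot c:
    fixmbr

    ; BOOT.INI [operating systems] entry for a SCSI/RAID-attached volume:
    ; scsi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect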

    I'd opt to demote the Windows Server 2003 R2 machine back to a member server, reinstall the OS (leaving the hardware RAID partition alone), and re-promote. That's going to get you a clean install of Windows that I'd trust to work w/o any future "strangeness" that might result from the strange backup and restore.

    If you do opt to attempt the backup and restore, I'd DCPROMO the server back to a member server anyway, just so you don't have troubleshooting of Active Directory startup issues to contend with on top of the rest of the process.

    jobu1324 : Hmm. I'm not worried about the FIXBOOT or FIXMBR problem. What kind of trouble have you had switching disk controllers? Was it just something that an edit of the boot.ini could fix, or was it more involved than that? The recommendation to demote the server before attempting anything is well-taken. Can you foresee any difficulties if we restored from a backup of a demoted server and then re-promoted it?
    Evan Anderson : Since you've got the driver for the hardware RAID already loaded you're on your way. You may get by w/ just an edit to the BOOT.INI and swapping the proper SCSI driver into NTBOOTDD.SYS. Since you're starting from a functional state, so long as you keep your current boot drives unchanged (and, of course, are careful with your data partition), you're safe to give it a try. Frankly, I think Microsoft could be more open in documenting exactly how the Windows boot process works. Moving Linux installs between unlike disk controllers is no sweat, but Windows always gives me the willies.
    jobu1324 : It may be a while before I try this out, so for now I'll vote you up. But if we do move ahead on this, I'll certainly come back here and mark this answered if your suggestions work. Thanks for the help!

What's the normal enterprise server configuration for those who use VMware?

I am using VMware at home and I have 8GB RAM. I was wondering what the configuration looks like in practice for enterprises who host VPSes: how much RAM and processor do their hosts have, and how many virtual machines are usually installed or recommended on one computer?

EDIT:

A server that can support about 50-60 virtual machines.

  • The answer to this is really "it depends." I'd say if you took a poll you would see everything from old multi-CPU single-core machines with 4-8GB of RAM all the way up to 4-way Nehalem machines with 256GB of RAM. As far as VMs per host... that really depends on the host and how many machines it can handle. I've seen everything from an ESXi box running 1 VM (to get around a hardware compatibility issue) all the way up to monster servers hosting 50-60 VMs each.

    EDIT: in response to the question:

    I've seen 50-60 workstation images running on a Dell M600 blade server with 24GB of RAM and the E5506 processor (they were bought before the Hyper-Threaded models came out).

    We also have about 40 servers running on the same type of host with 48GB of RAM.

    My friend has a server (don't know the model) at his office that is a four-way Opteron system with 196GB of RAM. Not sure how many VMs they are running off it, but I'd bet it gets up to the 50-60 range.

    Master : So yes, basically I would like to know the configuration of the monster server you are talking about with 50-60 VMs.
    From Zypher
  • Everything depends on the workload of the VMs - in an average server environment I'm generally aiming for no more than 30 VMs per physical server, and I choose to scale out (i.e. add more physical boxes at that spec) rather than up (buying fewer, more expensive servers), but even that depends on what the customer wants.

    The systems that I've seen running more than 50 (server) VMs are things like the Dell M905 blades - these take 4x Opteron 83xx (4-core) or 84xx (6-core) CPUs running at 2.1-2.9GHz, and total RAM on these can go up to 196GB. There is also Dell's R900\R905, which are 4U rack-mounted 4-way systems running either Intel (7400-series Xeon) or AMD (83xx\84xx Opteron) processors, again supporting large amounts of RAM (up to 256GB). These were generally sized to run 50-60 VMs under normal conditions but are capable of running double that in extreme situations when some hosts in a cluster are down. A fully configured R905 with top-of-the-line CPUs, fully populated with RAM and with both MS Server 2008 Datacenter Edition and full licenses for ESX 4 Enterprise Plus will set you back the best part of $90k list.

    VMware will support up to 25 virtual CPUs per physical core with ESX 4 update 1, so (in theory) you could have 200 VMs on a dual quad-core system. That would be pushing it a bit too far for server virtualization scenarios, but for desktop virtualization you can easily see > 100 VMs on a dual-socket system that isn't under any particular stress.

    Master : Wow, that was the info I was looking for. So do you back up all of that to something else, or take snapshots of all VMs daily? I want to know how they handle backup of a server that big.
    Helvick : All the VMs in those environments are stored on large SANs - EMC Symmetrix or similar, in one case an Equallogic group with 11 arrays on the primary side - budget around $250-500k for the main storage. All storage in these environments is aggressively mirrored to a secondary site and the whole environment is covered by something like VMware SRM to enable rapid failover. Environments at that scale target very high service availability, but it costs a lot - an R905 configured to the max and licensed for VMware Enterprise Plus and Microsoft Datacenter Edition doesn't leave much change from $80k.
    From Helvick
  • We offer dedicated virtual servers to our larger customers and generally plan on a single core per VM (as many VMs are quite low usage) and an average of 4GB per VM. So the machines we're buying now (HP BL490c G6's) have 16 effective cores so we put 72GB (18x4GB DDR3) inside.

    Oh and we put no more than 6 VMs (at an average of 50GB + 4GB memswap) per 500GB LUN (we use HP XP/EVAs with R1 arrays using 600GB 15krpm FC disks). We only use thin disks for development environments, where we're happy to increase the VM/host and VM/LUN ratios.

    For internal systems we tend to bank on 2 cores and 8GB per VM.

    Hope this helps in some way.

    From Chopper3

.docx issue in Apache servers

What do I have to write in the .htaccess file so that visitors will be able to download .docx files?

  • Sounds like a browser configuration problem. Make sure your browser is configured to download files of the appropriate content type rather than trying to display them.

    Also, you could try adding the Content-Disposition header, which can be done with the Header directive in Apache: http://httpd.apache.org/docs/2.2/mod/mod_headers.html#header. Here's one explanation of how to use the header: http://support.microsoft.com/kb/260519
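
    A minimal .htaccess sketch combining both ideas (it assumes mod_mime and mod_headers are enabled; extend the extension list to taste):

    AddType application/vnd.openxmlformats-officedocument.wordprocessingml.document .docx
    <FilesMatch "\.docx$">
        Header set Content-Disposition attachment
    </FilesMatch>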

  • Are your docx files being served as zip files to the client? Since Office 2007 files are essentially just XML files zipped together, Unix treats them like regular zip files due to their magic numbers, and thus Apache sends the wrong MIME type headers.

    A rather succinct answer for this can be found in another serverfault question: http://serverfault.com/questions/19060/why-are-docx-xlsx-pptx-downloading-from-webserver-as-zip-files

    ilhan : No, they are not run as a zip file but as a text file.
    ilhan : I have already tried the method in the link; it didn't help.
  • This is an IE problem, but easy to solve in the .htaccess:

    <FilesMatch "\.(?i:docm|docx|xlsx|xlsm|xlsb|pptx|pptm|ppsx)$">
        Header set Pragma private
    </FilesMatch>
    

    Make sure you don't use SSL (https), or IE gives an error.

    From Beatniak

How to configure Apache as a forward proxy server to regex-replace domain names?

For example, I want to set up a forward proxy that forwards HTTP requests for a.com, b.com, c.com to a.mirror.com, b.mirror.com, c.mirror.com.

Currently I have to configure 3 vhosts as:

host-a: 
  ServerName a.com
  ProxyPass / http://a.mirror.com/
host-b: 
  ServerName b.com
  ProxyPass / http://b.mirror.com/
host-c: 
  ServerName c.com
  ProxyPass / http://c.mirror.com/

Is there any way to rewrite the domain part of the HTTP request, as in:

ProxyPassMatch http://(.*).com/ http://$1.mirror.com/

I wonder if I must do it with RewriteRules, but I don't know how to write the rule exactly, nor how RewriteRule compares to ProxyPass in performance - though performance isn't a big problem.

  • Depending on the details of your exact setup, this can be done using mod_rewrite. You would probably match on HTTP_HOST, strip out the part of the hostname that you want, and tack it onto .mirror.com, then use mod_rewrite's [P] flag to enable the proxy.

    This isn't exactly what you want, but it might get you a little closer:

    http://httpd.apache.org/docs/2.0/rewrite/rewrite_guide.html#uservhosts

    From muffinista
  • You want something like this:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(.*)\.com$
    RewriteRule ^(.*)$ http://%1.mirror.com$1 [P]
    

    (Untested.)
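
    A quick way to test either approach from a client (proxyhost here is a placeholder for your proxy's address):

    curl -x http://proxyhost:80 http://a.com/some/path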

Small Business Solution

Looking for ideas/thoughts on small office setups. Users: 25; remote users: 5; remote offices: 3.

I'm a big fan of Small Business Server, but I'm looking for mail archiving and NAS storage solutions to separate user data from AD and email.
I look forward to your thoughts and setups. Hearing from anyone with hosted solutions experience would also be nice.

Thanks

  • I would prefer iSCSI SAN solutions. Try StarWind software. It has free and trial versions, and I'm sure it'll satisfy all of your requirements.

    quack quixote : imho, an iSCSI SAN is overkill for this application... might depend on what exactly the budget is.
    ToreTrygg : What do you mean? Sounds like you have no experience with it at all. There is a free version too.
  • I would probably set up some type of Samba file server with LVM (OpenBSD, FreeNAS, Debian, Red Hat, etc. - take your pick) along with Postfix running a local mail account for each user.

    Assuming you're running Exchange, you can back up email in a clever manner by setting up a forwarding address for each user's mail to [username]@nas.local or something similar, thereby creating a constantly updated backup of each message.

    From entens
  • Well, I like Thecus products. They have Linux installed and different plugins available.

    From TiFFolk
  • I posted a comment asking for more information, but with a (loose) understanding of what you're trying to do, I'd recommend the following:

    I'm assuming budget constraints are in the "small business" range, so I'd say get an ALIX-based pfSense firewall from Netgate for each location, setting up IPSec/OpenVPN site-to-site tunnels to one "HQ" -- whatever location has the most on-site users should get the Small Business Server.

    As for separating user data from AD and email, you can move the Profiles, User's Shared Folders, and/or any other network shares to any logical drive -- as long as the server can see it, you can move it there. Same goes with Exchange's database.

    You mentioned email, so I'd recommend setting up Outlook Anywhere to work across the tunnels, or perhaps using IMAP; both are a little friendlier than MAPI profiles for higher-latency links like your IPSec tunnels.

    No idea what other user applications you have, can't comment on that.

    From gravyface

Apache Virtual Hosts with SSL

Hi,

I have a web server with full root access, which hosts 3 domains. They are on the same IP and managed via VirtualHost files under apache2.

I would like to add SSL capability to one of them, i.e. be able to access the same site via https://example.com

I have tried everything I found online, but most of it results in Apache not serving any content at all.

I'd be glad for any help on how to configure my system to support this.

Thanks,

Tuncay

  • You have to add this to your Apache conf:

    NameVirtualHost x.x.x.x:443
    

    with your IP, and then the virtual host:

    <VirtualHost x.x.x.x:443>
      SSLEngine on
      SSLCertificateFile /etc/apache2/ssl/cert.pem
      SSLCertificateKeyFile /etc/apache2/ssl/key.pem
      SSLCertificateChainFile /etc/apache2/ssl/ca.crt
    
      # ...
    </VirtualHost>
    
    : thank you very much, I had tried this - it hadn't worked before, but I think I also fixed some inconsistency with my other conf files.
  • What you're asking for is impossible. SSL is a separate layer that encapsulates your HTTP session, and it occurs before the HTTP session has begun. At this point, it's not possible for Apache to determine which hostname you are trying to access the server by.

    You can only use IP-based virtual hosts with SSL.

    For more information see this section of the Apache SSL/TLS FAQ.

    Edit: Sorry, I misread your question. I assumed you wanted SSL for all of your domains. However, if you look at the same FAQ, the solution to your question is there as well. You need to explicitly specify the ports for your HTTP-based NameVirtualHosts.
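
    For instance, a minimal sketch of what the FAQ describes, with the ports made explicit (the addresses and names are placeholders):

    NameVirtualHost *:80
    NameVirtualHost x.x.x.x:443

    <VirtualHost *:80>
        ServerName example.com
    </VirtualHost>

    <VirtualHost x.x.x.x:443>
        ServerName example.com
        SSLEngine on
    </VirtualHost>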

    : Thanks, it works now. The only unintended side effect is that https://myotherdomain.com also goes to https://mydomain.com, but I guess I can live with that for now.
    David Zaslavsky : Incidentally, it is *possible* to have multiple SSL-secured named virtual hosts on a single IP address - I do it on my website - but it produces all sorts of warnings in the Apache logs, and certificate warnings in the browser. I certainly wouldn't recommend it for a production site that needs to look clean.
    Kamil Kisiel : I believe you can get away with it if you use wildcard certificates, but that's certainly not a recommended practice. Also, as mentioned in the guide, you can also do it if you run the sites on different ports. eg: https://domain-a:443, https://domain-b:444
    BenM : Actually, SSL virtual hosting of multiple domains can be done using the Server Name Indication (SNI) feature. Apache 2.2.12 onwards allows multiple hosts on different domain names on the same IP and port, though there are some limitations on which clients support SNI.

Nginx, Varnish, ESI - Will that work?

I have several backends (one is nginx+passenger) to combine via ESI. Since I don't want to go without gzip/deflate and SSL, Varnish can't do the job out of the box. So I thought about the following setup:

http://img693.imageshack.us/img693/38/esinginx.png

What do you think? Overkill?

  • Based on the diagram, I'm not sure exactly what you're trying to do (what is ESI?). However, there's a small, fast load-balancing front-end server called "pound" and it will handle the SSL layer for you. It could sit alongside Varnish on the front end on port 443 (I assume you have Varnish on port 80?) and pass the SSL traffic directly to nginx (SSL can't be cached anyway, so there's no point in going through Varnish). Normal, unencrypted traffic would go to Varnish as expected.

    ms : +1 for pointing out that SSL-encrypted traffic can't be cached properly, because it is encrypted using different keys per connection. Varnish should be placed between the nginx frontend server and the reverse proxy where SSL is terminated, but this architecture is more complicated.
  • Do you need Varnish at all?

    1. nginx can cache results on disk or in memcached
    2. nginx has SSI (see the sketch below)
    3. nginx has the fair load balancer or ey-balancer
    4. Best practice says that HAProxy in front of nginx is a good move.

    Don't forget about KISS - the more components your system has, the less stable it becomes.
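
    A sketch of points 1-2, i.e. nginx handling SSL, gzip, SSI, and caching by itself (paths and the upstream address are placeholders):

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app:10m;

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        gzip on;

        location / {
            ssi on;                           # SSI instead of ESI
            proxy_pass http://127.0.0.1:8080; # your backend/app server
            proxy_cache app;
        }
    }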

  • While I haven't personally used it, Nginx does have an ESI plugin:

    http://github.com/taf2/nginx-esi

  • If ESI is an absolute must, I'd recommend the following setup:

    User -> Nginx (gzip + proxy + SSL termination) -> Varnish (ESI) -> Nginx app server.

    That way you don't have to delegate your SSL and gzip requests to one backend server and the ESI requests to another.

    Have Varnish strip the Accept-Encoding headers from the incoming requests; that way your backends won't try to gzip (if they're configured to do so), and Varnish can parse your backend response objects for ESI includes. Varnish will then present fully formed content to your Nginx proxy. That leaves the Nginx proxy to do compression and SSL delivery.
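
    In VCL that looks roughly like this (a sketch in Varnish 2.x syntax; adjust for your version):

    sub vcl_recv {
        # let nginx do the compressing; keep cached objects uncompressed
        remove req.http.Accept-Encoding;
    }

    sub vcl_fetch {
        # process ESI includes in backend responses
        esi;
    }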

    I've got a very similar setup running in production (without the SSL termination), and I've found it works quite gracefully.

    Joris : Then your ESI pages won't be gzipped?
    flungabunga : Yes they will, because Nginx still receives the Accept-Encoding header; it takes the response from the Varnish server (be it ESI, static, or dynamic) and gzips it.

Gigabit ports on Dell PowerConnect 2324 never worked

I have a Dell PowerConnect 2324 - 24 10/100 ports with 2 Gigabit ports - whose Gigabit ports have never worked. It's supposed to auto-detect everything, but even trying different combinations I never get any lights or a connection on the 2 Gigabit ports. I've always wired my uplink into the normal 10/100 ports (which always works, whatever the configuration of cables). I've seen this problem reported by a couple of other people on the Internet with no solution. Am I missing something obvious?

UPDATE

For some reason the manual doesn't seem to be linked from that page. I only have one other port connected, and I've tried it in several. The Gigabit ports are 10/100 compatible and, I would think, auto-sensing (not stated explicitly, but it says all ports are MDI/MDIX sensing and auto speed/duplex). If they are not auto-sensing, then there is no information about how to configure the switch (it is unmanaged).

  • I believe those two gigabit ports are probably shared with ports 23 & 24 of the 10/100 ports. Make sure those are empty if you're trying to use the gigabit side.

    xeon : A lot of the lower-end switches do this. I have two HP ProCurve 1824-24G that share ports 23 and 24 with the multi-purpose slots.
    David : From the product FAQ at Dell: Are the uplinks on the PowerConnect 2324 switch so-called "combo ports"? No. The Gigabit Ethernet links on the PowerConnect 2324 switch are standalone ports. A full 26 ports of connectivity are available.
    Christopher Karel : Well, then that's definitely not it. Are you comfortable logging into the console of this switch?
    Sam Brightman : I would be, if I knew how.
  • David, looking at the spec sheet you linked, I see that the gigabit ethernet ports are not listed as 'plug and play' or 'autosensing'. So you may have to manually configure them via the console or web admin page.

    There's a good chance you'll do that with:

    #            configure
    (config)#    interface ethernet g1
    (config-if)# no shutdown
    (config-if)# duplex full
    (config-if)# speed 1000

    If that doesn't help, try displaying all your interface statuses. This will also help if you don't know the name/number of the gig-E ports:

    # show interface status

    sparks : They could also be gigabit-only, which would not work when plugged into a 10/100 switch on the other end.
    Sam Brightman : The manual says they are 10/100/1000 and seems to imply auto-sensing. See update - no indication of how to configure it.
    Christopher Karel : OK, I didn't realize this was an unmanaged switch. I know you mentioned different combinations of connections. Does that include different types of devices (PC and/or router, instead of a switch) and different cabling? (crossover and regular) And of course, all the random permutations thereof.
    Sam Brightman : I've tried a few combinations; not 100% sure I've covered them all, but I think so. All ports are specified in all Dell documents as auto-everything.
  • It's a brand-name switch, which has to be supported by the vendor, so why not call Dell and ask?

    Sam Brightman : It's not common that you get a total dud, so my first assumption was ignorance on my part. Plus, my experience with calling is that they tell me to clear my cookies or some other BS. I didn't have time to sort it out when it was new (we weren't using the bandwidth); now it's a few years old and maybe isn't supported. Of course I will check if there's no other possibility.
    dyasny : I have worked in Dell Server/Switch support. You get extremely professional people there. This is NOT the local ISP support who will ask you to reboot everything and clear the cookies - this is an experienced sysadmin who will try to help you out, and if he cannot, he will escalate to a very good networking expert. Seriously, the guys in the networking support there are real gurus.
    From dyasny