Use Windows Authentication for SSMS as Domain User from Non-Domain Account

February 26th, 2011 1 comment

For those who work in Active Directory environments but (for whatever reason) prefer to keep their computer off the domain (consultants, that's probably you!), you DO have the option to run SQL Server Management Studio using Windows Authentication from your non-domain user account. I only recently discovered this, and I can't believe it took me this long to find it.

SQL Server Management Studio 2008:
runas.exe /netonly /user:MYDOMAIN\myuser "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"

SQL Server Management Studio 2005 Express:
runas.exe /netonly /user:MYDOMAIN\myuser "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\ssmsee.exe"

You get the idea :)

Now when you run Management Studio, you can select 'Windows Authentication' to log in to your target SQL Server using your domain credentials. No need to RDP into your SQL Server or run Management Studio in a domain-joined virtual machine!

Exclude Squid Cache from updatedb.mlocat

February 15th, 2011 No comments

One of the Squid servers in a cluster I manage had an unusually high load, while the rest of the Squid servers, handling a roughly equal number of connections, were within acceptable load levels.

While reviewing the process list, I found that updatedb.mlocat was running during the busiest time of the day. Turns out it is enabled by default in cron.daily on Debian Lenny, and it was using a noticeable amount of CPU resources. Wtf?

It was attempting to index every file in the Squid cache. On these servers, the cache lives on a separate partition mounted at /mnt/squid/, and updatedb has no way of knowing it shouldn't track those files along with the rest of the filesystem. Luckily, there is an easy solution.

Exclude your Squid cache location via the “PRUNEPATHS” variable in /etc/updatedb.conf (in my case it was /mnt/squid)

I added my Squid Cache location to PRUNEPATHS, then the process dropped off the radar, and server load returned to acceptable levels.
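
For reference, here's roughly what the finished line looks like (a sketch based on a stock Debian Lenny /etc/updatedb.conf; your existing PRUNEPATHS entries will likely differ):

PRUNEPATHS="/tmp /var/spool /media /mnt/squid"

The next time cron.daily fires, updatedb skips the cache partition entirely.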

Normalize Accept-Encoding Headers via HAProxy for Squid Cache

April 10th, 2010 No comments

I noticed that when browsing one of our heavy-traffic websites under different browsers, I would see completely separate cached versions of a page depending on which browser I used. We use a cluster of Squid servers to cache the many pages of this particular website, and I had assumed that configuring the HAProxy load balancer to balance requests based on URI (so each page would always be served by the same Squid server, to optimize hit rate) was all that was needed. Apparently not so!

So… what’s the deal?

Apparently, the Vary: Accept-Encoding header returned by Squid tells the cache to keep a separate copy of a page for each distinct Accept-Encoding request header, so browsers only ever receive pages in an encoding they support. For example, Firefox tells Squid it accepts "gzip,deflate" as encoding methods, while Chrome sends "gzip,deflate,sdch". Internet Explorer differs from Firefox by only a single space ("gzip, deflate"), but that was enough for Squid to cache and serve a completely separate object. This felt like a waste of resources. The "Big 3" browsers all support gzip and deflate (among other things), so in order to further optimize performance (and hit rate), I decided to normalize the Accept-Encoding headers.

My first attempt was to do something on the Squid servers themselves. However, the “hdr_replace” function I looked into did not have any impact on how Squid actually handles those requests, so I could hdr_replace all day long and each browser would still see separate cache objects for each individual page.

The alternative was to handle it in HAProxy, and it turns out this works well! After a bit of reading, I found the "reqirep" directive, which rewrites matching request headers before they are passed on to the back-end servers. It uses regular expressions to do the deed, and after some testing/serverfaulting/luck I ended up adding the following line to the HAProxy backend configuration:

reqirep ^Accept-Encoding:\ gzip[,]*[\ ]*deflate.* Accept-Encoding:\ gzip,deflate

This regex will match for IE (gzip, deflate), Chrome (gzip,deflate,sdch), and Firefox (gzip,deflate). It should also be noted that Googlebot uses the same header as Firefox. I didn't bother looking into other browsers, as I was most concerned with the major ones. If someone wants to contribute a better regex to get the job done, let me know!
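
For context, here's roughly where that line lives in the configuration (a sketch only; the backend name, balance method, and server addresses are placeholders rather than my actual setup):

backend squid_cache
    balance uri
    reqirep ^Accept-Encoding:\ gzip[,]*[\ ]*deflate.* Accept-Encoding:\ gzip,deflate
    server Squid1 10.1.1.10:3128 check
    server Squid2 10.1.1.11:3128 check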

An additional benefit of normalized Accept-Encoding headers is that your cache gets primed much more quickly when everybody visiting your website retrieves the same version of a page. Less work for your web server farm, database servers, and all that.

Speed is good. Good luck!

Google Search Gets a Facelift

March 22nd, 2010 No comments

I was trying to be productive and did a quick Google search when I noticed that the SERPs (search engine results pages) had just gotten a facelift. Apparently not everyone is seeing this yet, so Google is probably slow-releasing these changes, but here is a preview!

Screen-shot #1 shows the new, clean header with side navigation:

This second screen-shot shows the updated (and more colorful) pagination:

I like it. It appears that location-based search will play a larger role in upcoming search results, as the currently set location is directly under the search box.

Sorry, China, but you probably won’t get to see the new style anytime soon. :/


How to Install SQL Server Management Studio 2005 on Vista 64-bit

March 22nd, 2010 3 comments

If you have tried to install the 64-bit version of SQL Server Management Studio on Vista 64-bit, then you have probably run into the following error:

Product: Microsoft SQL Server Management Studio Express — The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 29506.

The problem is that User Account Control (UAC) interferes with the installation process and causes it to fail. To get around the UAC issue, follow these few simple steps:

  1. Save the SQL Server Management Studio MSI file to, say, “C:\temp”
  2. Go to Start > All Programs > Accessories
  3. Right-click on “Command Prompt”, and click “Run as Administrator”
  4. Once you open the command prompt, browse to “C:\temp”
  5. Run the MSI file by typing in the name of the file and hitting Enter

The installation will run using the administrator privileges inherited from the elevated command prompt.
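
For example, assuming the installer was saved as SQLServer2005_SSMSEE_x64.msi (substitute whatever filename you actually downloaded), the elevated prompt session looks something like this:

cd C:\temp
SQLServer2005_SSMSEE_x64.msi

Because the command prompt was started with "Run as Administrator", the MSI launches elevated and the 29506 error should not appear.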

You should now be free of any error code 29506. Good luck!

Delete Duplicate Rows/Records in MySQL Table

March 22nd, 2010 2 comments

Most articles on removing duplicate rows from a MySQL table involve three steps, but the following is what I use to purge duplicate records in one simple query.

DELETE FROM `myTable` WHERE id NOT IN (SELECT t1.id FROM (SELECT id, groupByColumn FROM `myTable` ORDER BY id DESC) as t1 GROUP BY t1.groupByColumn)

– “myTable” is the name of the table with duplicate rows
– “id” is the name of the primary key identifier in “myTable”
– “groupByColumn” is the name of the column used to differentiate records as duplicates

Example: Table of Videos with the duplicate match being made on the “title” field.

DELETE FROM `videos` WHERE id NOT IN (SELECT t1.id FROM (SELECT id, title FROM `videos` ORDER BY id DESC) as t1 GROUP BY t1.title)

It's a good SQL query to save or bookmark for those times when you need to do some maintenance or cleanup during development.
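
If you want to preview the damage before deleting anything, a quick check like this (same placeholder table and column names as above) lists the duplicated values and how many copies of each exist:

SELECT groupByColumn, COUNT(*) AS copies FROM `myTable` GROUP BY groupByColumn HAVING COUNT(*) > 1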


Using the ACL in HAProxy for Load Balancing Named Virtual Hosts

September 18th, 2009 5 comments

Until recently, I wasn't aware of the ACL system in HAProxy, but once I found it I realized I had been missing a very important part of load balancing with HAProxy!

While the full configuration settings available for the ACL system are listed in the configuration doc, the example below includes the basics you'll need to build an HAProxy load balancer that supports multiple host headers.

Here is a quick example haproxy configuration file that uses ACLs:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http-in
    bind *:80
    acl is_www_example_com hdr_end(host) -i example.com
    acl is_www_domain_com hdr_end(host) -i domain.com
    
    use_backend www_example_com if is_www_example_com
    use_backend www_domain_com if is_www_domain_com
    default_backend www_example_com

backend www_example_com
    balance roundrobin
    cookie SERVERID insert nocache indirect
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server Server1 10.1.1.1:80 cookie Server1
    server Server2 10.1.1.2:80 cookie Server2

backend www_domain_com
    balance roundrobin
    cookie SERVERID insert nocache indirect
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server Server1 192.168.5.1:80 cookie Server1
    server Server2 192.168.5.2:80 cookie Server2

In HAProxy 1.3, ACL rules are placed in a "frontend" and, depending on the logic, the request is proxied through to any number of "backends". You'll notice in the frontend named "http-in" that I'm checking the host header using the hdr_end feature, which performs a simple check to see whether the host header ends with the provided argument.

You can find the rest of the Layer 7 matching options by searching for "7.5.3. Matching at Layer 7" in the configuration doc I linked to above. A few of the options I didn't use but you might find useful are path_beg, path_end, path_sub, path_reg, url_beg, url_end, url_sub, and url_reg. The *_reg options allow you to perform regex matching on the URL/path, but there is the usual performance consideration to make for regex (especially since this is a load balancer).

The first “use_backend” that matches a request will be used, and if none are matched, then HAProxy will use the “default_backend”. You can also combine ACL rules in the “use_backend” statements to match one or more rules. See the configuration doc for more helpful info.
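
For example, listing multiple ACLs on one "use_backend" line ANDs them together. Here's a small sketch using a hypothetical is_static ACL and static_example_com backend alongside the ones defined above; the configuration doc also covers "or" for either/or matching:

    acl is_static path_beg /images /css /js
    use_backend static_example_com if is_www_example_com is_static
    use_backend www_example_com if is_www_example_com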

If you’re looking to use HAProxy with SSL, that requires a different approach, and I’ll blog about that soon.

Generate a PKCS #12 (PFX) Certificate from Win32 CryptoAPI PRIVATEKEYBLOB

July 30th, 2009 No comments

We had an accounting system that used a Microsoft Win32 CryptoAPI blob to encrypt/decrypt credit card information for recurring customers. It was time for an upgrade to .NET land. Keith, the lead developer for this project, decided it would be beneficial to switch to x509 certificates for improved key management (and I wasn't going to argue).

So the legacy system encrypted/decrypted cards with a PRIVATEKEYBLOB, and our ultimate goal was a certificate in PKCS #12 format. My system at the office is Windows XP, and I wanted to use OpenSSL to convert the private key blob into something more suitable for the new system, but I didn't want to transmit any of our top-secret keys across the VPN, or even across the network for that matter.

OpenSSL did not begin supporting PRIVATEKEYBLOB as an acceptable format until 1.0.0 Beta, but 0.9.8h was the only Windows binary readily available. So I grabbed the OpenSSL source (here) and compiled it using GCC within Cygwin. If you don’t have Cygwin (get it here), it’s very easy to get started, and you can select from a large variety of Linux packages during setup. So, during setup, look for GCC and make sure you enable it.

Here’s how to compile OpenSSL 1.0.0 Beta on your native Linux environment or with Cygwin:

[code lang="bash"]$> cd /usr/local/
$> wget http://www.openssl.org/source/openssl-1.0.0-beta3.tar.gz
$> tar -xzf openssl-1.0.0-beta3.tar.gz
$> cd openssl-1.0.0-beta3
$> ./config && make && make install && make clean[/code]

If something broke during install, check the online docs, or re-run Cygwin setup to make sure you selected the gcc toolset. I’ll assume from this point forward you are using OpenSSL 1.0 in either a native Linux or a Cygwin environment. If you aren’t sure, start OpenSSL and type “version” to check your ::drumroll please:: version number.

Let’s get started.

The OpenSSL command below takes your PRIVATEKEYBLOB and outputs an RSA private key in PEM format. Note the use of "MS\ PRIVATEKEYBLOB" rather than plain "PRIVATEKEYBLOB": the backslash is required to escape the space after "MS" when the format name is passed as a command-line parameter. If all goes well, you should end up with a PEM file. If it doesn't work, try specifying a different input form (e.g. DER or PRIVATEKEYBLOB instead of MS\ PRIVATEKEYBLOB).

[code lang="bash"]$> openssl rsa -inform MS\ PRIVATEKEYBLOB -outform PEM -in private.pvk -out private.pem[/code]

Now that we have a PEM file containing the RSA private key, we can generate a new certificate based on it (command below). This produces an x509 certificate valid for five years (1825 days). When you run it, you'll be prompted for the usual country/state/city/company information; what you specify there is up to you. I'd also recommend setting an export password when the pkcs12 command at the end prompts for one.

[code lang="bash"]$> openssl req -new -x509 -key private.pem -out newcert.crt -days 1825[/code]

If all continues to go well, you should have a private key in PEM format and your brand new certificate. One last command is needed to generate the PKCS #12 (aka PFX) certificate bundle.

[code lang="bash"]$> openssl pkcs12 -export -in newcert.crt -inkey private.pem -name "My Certificate" -out myCert.p12[/code]

If you didn't receive any errors, then congratulations! You can now import this PKCS #12 bundle into any Windows certificate store and no longer need to hard-code key blobs into your application.
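
If you'd like to sanity-check the bundle before importing it (optional, but cheap insurance), OpenSSL can dump its contents back out; it will ask for the export password you just set:

[code lang="bash"]$> openssl pkcs12 -info -in myCert.p12[/code]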

Hope this helps save someone a few hours' time.

How to Copy or Move a Joomla Website

June 19th, 2009 1 comment

Intro

If you manage one or more Joomla websites, eventually you'll have to move them elsewhere. It's pretty much a fact: performance requirements will change, you'll find better pricing elsewhere, your dedicated server will die, etc.

There are a few options most people have to choose from:

  • FTP client and phpMyAdmin method (aka the long, boring method)
  • SSH/Shell method (aka the cool, quick method)
  • PHP system() method (aka middle of the road and kind of fun method)
  • Joomla “clone” component (your mom could do it)

Move Joomla with an FTP Client and phpMyAdmin

  1. Download the entire Joomla website via FTP client (you’re using S-FTP to connect, right?)
  2. Use phpMyAdmin to export a SQL dump of your database
  3. Upload the entire Joomla website via FTP client
  4. Use phpMyAdmin on the new server to import the SQL dump from the old website
  5. Update configuration.php:
    1. Update the MySQL database credentials
    2. Update the tmp/logs path
    3. If you use FTP Layer, update the credentials
  6. Update .htaccess to match any changed server requirements

Easy and straightforward. It's a long, slow process, but any junior network admin could handle it for you if you don't want to get your hands dirty.

Clone Joomla with SSH (shell) Access

  1. Login to your server via SSH
  2. Browse to your Joomla website root
  3. Run these commands:
    [code lang="bash"]tar -czf ../backup-example-com-20090619.tar.gz .

    mv ../backup-example-com-20090619.tar.gz ./

    mysqldump -u yourUsername -p -h yourMySQLHostname yourDatabaseName > backup-example-com-20090619.sql[/code]

  4. Do you need to move this to a remote server or another location on the same server?
    1. Local Path
      1. Copy both backup files to the new website root
      2. Browse to the new Joomla website root
    2. Remote Path
      1. Login to remote server via SSH
      2. Browse to the new Joomla website root
      3. Use wget to download the archive and SQL dump to this server:
        [code lang="bash"]wget http://www.example.com/backup-example-com-20090619.tar.gz
        wget http://www.example.com/backup-example-com-20090619.sql[/code]
  5. Run this command:
    [code lang="bash"]tar -xzf backup-example-com-20090619.tar.gz[/code]
  6. Run this command (assuming you have made a new, blank database)
    [code lang="bash"]mysql -u yourNewUsername -p -h yourNewMySQLHost yourNewDatabase < backup-example-com-20090619.sql[/code]
  7. Update configuration.php & .htaccess as shown in the first example

More complicated (obviously), but if you like doing things the hard-but-fun way, it's a great way to go.

Using PHP’s system() or back-tick Commands to Copy Joomla Website

I wasn't aware of this method until after I started managing a whole slew of websites on a cloud hosting platform (Scale My Site). Cloud hosting platforms (and many shared hosts) don't provide SSH access because it's simply not feasible, particularly in the cloud case, where your website runs across hundreds of different server nodes. You can perform the same steps as the SSH procedure above using shell-execution commands in PHP.

  1. Create a new file called copy-me.php
  2. Write the following code into this file:
    [code lang="php"]// Archive the site and move the archive into the web root so it can be fetched by the new server
    echo `tar -czf ../backup-example-com-20090619.tar.gz . && mv ../backup-example-com-20090619.tar.gz ./`;
    // No interactive prompt is available here, so pass the password inline (-pYourPassword) instead of a bare -p
    echo `mysqldump -u yourUsername -pYourPassword -h yourMySQLHostname yourDatabaseName > backup-example-com-20090619.sql`;[/code]
  3. Execute the PHP file by accessing it from a browser:
    Browser> http://www.example.com/copy-me.php
  4. Create a new file on the destination website called update-me.php
  5. Write the following code into this file:
    [code lang="php"]// Pull the archive and SQL dump from the old site, unpack, then import into the new database
    echo `wget http://www.example.com/backup-example-com-20090619.tar.gz`;
    echo `wget http://www.example.com/backup-example-com-20090619.sql`;
    echo `tar -xzf backup-example-com-20090619.tar.gz`;
    // Again, supply the password inline since there is no terminal to answer a -p prompt
    echo `mysql -u yourNewUsername -pYourNewPassword -h yourNewMySQLHost yourNewDatabase < backup-example-com-20090619.sql`;[/code]
  6. Execute the PHP file by accessing it from your browser:
    Browser> http://www.exampleDestination.com/update-me.php
  7. Update the configuration.php and .htaccess files as needed

Cool, huh?

Use a Backup or Clone Component from Joomla Extension Directory

If you prefer not to do anything yourself, and want to keep it as simple as possible, then a backup component from the JED is the way to go:

http://extensions.joomla.org/extensions/access-&-security/backup

I have only used one of those components, and it had a few bugs that needed working out; in the end, the backup, move, and clone took more time than doing it manually.

Afterword

There are shortcuts you can take here depending on your environment. For instance, you don't need to create archives at all; you can pipe the mysqldump output directly into another mysql command (using the new database's credentials). However, I prefer working with archives and solid files, especially with the PHP-based method, because you could accidentally hit the cloner file again and wipe an existing MySQL database if you aren't careful. On top of all this, I'd recommend removing the update-me and copy-me files after using them.
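
For what it's worth, here's a rough sketch of that pipe approach (hypothetical credentials and hostname; the passwords go inline only because a pipe leaves no room for two interactive prompts):

[code lang="bash"]mysqldump -u oldUser -pOldPassword oldDatabase | mysql -u newUser -pNewPassword -h new-db.example.com newDatabase[/code]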

Improve Scalability and Performance in ASP.NET Apps

March 17th, 2009 No comments

If you’re looking to scale out ASP.NET applications, here is an interesting article that goes into length on important aspects to improve performance for .NET applications in high-traffic environments.

The article covers the following topics:

  • Optimizing the ASP.NET pipeline system
  • ASP.NET, AJAX caching, and what you need to know
  • Deploying ASP.NET from a staging to a production environment
  • Optimizing the ASP.NET process configuration
  • Using a Content Delivery Network (CDN) with your ASP.NET apps
  • ASP.NET 2.0 Membership tables
  • Progressive UI loading for a smoother end-user browser experience
  • Optimizing ASP.NET 2.0 Profile provider

Original article: Performance & Scalability in ASP.NET