All posts by Rich

Flex vs CS3 – Filesize concerns (aka Flex SWFs are huge!)

For a while now, I’ve been using FlashDevelop and CS3 exclusively for all of my AS3 development.  However, as my project became more and more complex, CS3 crashes became more frequent.  A few days ago, in fact, CS3 was crashing on every single compilation.  Very strange.

So, I looked into migrating my compilation to the Flex SDK. After I got everything working, I noticed that the file sizes of my AS3-only projects (no imported Flex components) were much larger than those produced by CS3, even when compiled in FlashDevelop’s ‘Release’ mode.

After some tinkering, I discovered that the ‘verbose-stacktraces’ option increases file size considerably. If you’re frustrated with large file sizes, make sure you’re actually compiling with both ‘debug’ and ‘verbose-stacktraces’ set to ‘false’ (or remove the parameters entirely).

(In FlashDevelop, change to ‘Release’ mode using the dropdown on the main toolbar, and turn off ‘verbose-stacktraces’ at Project -> Properties -> Compiler Options -> Verbose Stack Traces. Also, on the same screen, make sure ‘Optimize Bytecode’ is set to ‘True’.)

The Bottom Line

Bad

mxmlc -load-config+=obj\FeaturificSlimConfig.xml -debug=true -incremental=true -optimize=true -verbose-stacktraces=true  -o obj\FeaturificSlim633729054843281250

Good

mxmlc -load-config+=obj\FeaturificSlimConfig.xml -incremental=true -optimize=true -o obj\FeaturificSlim633729054843281250
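
Also Good (hypothetical)

Setting the flags explicitly should work as well. Here’s what an explicit-false equivalent of the ‘Good’ line would look like, assuming the same config file and output name:

mxmlc -load-config+=obj\FeaturificSlimConfig.xml -debug=false -verbose-stacktraces=false -incremental=true -optimize=true -o obj\FeaturificSlim633729054843281250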

Make jQuery and Prototype Love Each Other

Hello Googlers – Are you getting the following mind-boggling error on your web app as well?

Here it is in its expanded form within Firebug:

“Security error” code: “1000”? Real helpful, Prototype. Thanks.

So, first of all, it will likely take you a very long time to realize that these errors stem from an incompatibility between jQuery and Prototype. Then, it will probably take you even longer to find out why this is the case and devise a workaround.

Well, in the interest of saving you loads of time, here’s what’s going on.

jQuery and Prototype both use the ‘$’ function (yes, the dollar sign) extensively. Unfortunately, each framework defines $ differently, and only one definition of the dollar-sign function can exist at a time. So either jQuery is happy or Prototype is happy, but never both simultaneously.
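To make the collision concrete, here is a minimal sketch (not either library’s actual source) of what happens when both scripts load:

// Loaded first: a Prototype-style $ that looks up a single element by id.
window.$ = function (id) {
  return document.getElementById(id);
};

// Loaded second: a jQuery-style $ that takes a CSS selector.
// It silently overwrites the definition above.
window.$ = function (selector) {
  return document.querySelectorAll(selector);
};

// Prototype's own internals still call $('someId') expecting a single
// DOM element back; they now get a NodeList, and things fall apart.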

If you’re trying to use jQuery and Prototype at the same time, you might get an error like the one shown above, or you might get a completely different one, depending on variables such as which library was loaded first.

The Solution

In theory, the solution is easy: the jQuery documentation tells you exactly what to do. In our particular scenario, though, none of the proposed solutions worked. So we simply edited prototype.js (and scriptaculous.js, effects.js, lightwindow.js, etc.) so that instead of referencing ‘$()’, they now reference ‘$p()’. This allows our jQuery code to execute flawlessly without modification while still allowing Prototype to function in parallel.
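For reference, the fix the jQuery documentation proposes is jQuery.noConflict(), which releases the $ shortcut back to whatever library defined it before jQuery loaded. A minimal sketch:

<script src="prototype.js"></script>
<script src="jquery.js"></script>
<script>
  jQuery.noConflict();          // $ now belongs to Prototype again
  jQuery('div.panel').hide();   // use jQuery through its full name
  var el = $('someId');         // Prototype's $ works as before
</script>

Our rename was effectively the mirror image of this: Prototype gives up the short name instead of jQuery.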

Hope something here helps you get rid of Prototype’s worthless error message too!

Text versions of the error message to aid in search engine indexing:

Short version:
Security error” code: “1000
http://www.qgia.com/qgia/lightwindow/javascript/prototype.js
Line 1264

Long version:
Security error” code: “1000
(no name)()common.js.pkg.php (line 271)
(no name)(“Uncaught exception in hook (`onloadhooks’) #0: LinkController is not defined [at line 400 in http://…”, “error”)common.js.pkg.php (line 271)
(no name)()common.js.pkg.php (line 269)
_runHooks(undefined)common.js.pkg.php (line 152)
_onloadHook()common.js.pkg.php (line 148)
(no name)()common.js.pkg.php (line 155)
null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);

WordPress on OS X Tiger – “error establishing a database connection”

Installing WordPress on OS X Tiger and getting the ambiguous “error establishing a database connection” error?  Fixing this problem was easy for me, once I knew what to do.

The problem is with your php.ini file.  The default socket value is initially blank:

mysql.default_socket =

To get WordPress to work, you need to set a default socket.  Values that have worked in some situations include:

mysql.default_socket = /private/tmp/mysql.sock

mysql.default_socket = /tmp/mysql.sock

In my particular situation (default MySQL and Apache installs on Tiger), I had to use the following socket:

mysql.default_socket = /opt/local/var/run/mysql5/mysqld.sock
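
If none of those values work, you can ask MySQL itself where the socket lives (assuming the MySQL command-line tools are installed and the server is running; you may need to supply credentials):

mysqladmin variables | grep socket

or:

mysql -u root -e "SHOW VARIABLES LIKE 'socket';"

Copy the reported path into mysql.default_socket and restart Apache.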

For info on locating and modifying your php.ini file, see the Bust Out Solutions blog.

Internet Explorer cannot open the Internet site, Operation aborted

Just a bit of Google-fodder here. One of my Drupal-based sites started generating weird errors in IE recently. “Internet Explorer cannot open the Internet site, Operation aborted”. Whaaa? The error prevented the page from loading and made the page unusable. Lame. Don’t we love IE?

Anyway, I thought I’d add my solution to the hive. Inspired by reports of conflicting JavaScripts (notably SWFObject) causing this problem, I realized that the errors started occurring around the time I enabled the “Lightbox” Drupal module. I disabled the module and everything worked like a charm. Using defer=”defer” didn’t work reliably, since SWFObject does, in fact, write to the body.

For now, this works.  However, it’s not a good long term solution… Any thoughts on how to get SWFObject and Lightbox to play nicely together?
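
One approach I haven’t verified against this particular Drupal + Lightbox combination: since IE throws “Operation aborted” when a script modifies the body before the page has finished parsing, you can sometimes sidestep the error by deferring SWFObject’s DOM writes until the page has fully loaded. A sketch using the classic SWFObject 1.5 API (the file name and element id below are placeholders):

// Run SWFObject only after the page finishes loading, so nothing
// appends to <body> while IE is still parsing it.
window.onload = function () {
  var so = new SWFObject('player.swf', 'player', '400', '300', '9');
  so.write('flashcontent');   // id of a placeholder div in the page
};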


scRUBYt! and WWW::Mechanize foiled (aka Sneaky Yahoo Scraping Prevention)

Screen scraping. It’s one of those techniques that can be loads of fun and also heaps of frustration. On a recent project, I was charged with scraping some pages from Yahoo! Shopping.

I Hate Screen Scraping

After coding up a quick skeleton, I was surprised to see that none of my initial tests were working – it was almost as if the data wasn’t even there. Checking out the rendered page in Firefox revealed the problem – none of the links I needed to scrape were present. At all. So, I hit up the Yahoo source code to verify that the links were in the original. Yup, right there. Weird!

I tried out a variety of my favorite scraping tools – scRUBYt!, WWW::Mechanize, and even good ol’ cURL. None of these tools could acquire HTML source from the Yahoo server with links intact, even when I provided a valid Firefox User-Agent string.

Next, I dropped down to an even lower level – packet sniffing with my favorite sniffer, Packetyzer, and sending HTTP 1.1 requests directly via telnet.

rich@redbuntu:~$ telnet shopping.yahoo.com 80
Trying 209.73.163.95...
Connected to pdb3.shop.yahoo.akadns.net.
Escape character is '^]'.
GET / HTTP/1.1
Host:shopping.yahoo.com
 
HTTP/1.1 200 OK
Date: Tue, 15 Apr 2008 17:48:16 GMT
P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
Cache-Control: private
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
 
a17e
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
etc...

Three interesting things jumped out at me when I executed this request. First, I didn’t recognize the P3P header; it turns out this is just a compact privacy policy, so no dice there. Second, I noticed the absence of the Server header, which is usually used to identify the server’s software, version, installed modules, etc. Since Yahoo opted to hide this line, it seems increasingly possible that Yahoo is ‘gaming’ us. (An external lookup only shows that they’re running FreeBSD with an unidentified web server.) Third, I noticed that the response was being sent uncompressed. This made sense, since I did not specify in my HTTP 1.1 request that my ‘client’ (telnet) supported compression. However, while sniffing packets earlier, I had noticed that all of the HTTP responses were compressed. (Edit: Another interesting feature – ‘a17e’. That isn’t page content at all. With ‘Transfer-Encoding: chunked’, each chunk of the body is prefixed with its length in hexadecimal, so ‘a17e’ just announces that the first chunk is 0xa17e = 41,342 bytes long.)

Those Sneaky Geeks

It sounded crazy, but I realized that Yahoo could be using clients’ compression support to differentiate between bots and actual web browsers. So, I popped open another terminal and tried making a request as I did before with cURL, except with a Firefox User-Agent string and compression enabled.

curl --user-agent "Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.5) Gecko/20070713 Firefox/2.0.0.5" --compressed "http://shopping.yahoo.com/path/to/resource" > result.html

Bingo! All of the links were intact; the source was identical to what a standard web browser would retrieve. So it seems that Yahoo has written some sort of Apache module, or perhaps just engineered their application code, to vary its response according to whether or not the client supports compression. This is quite a sneaky way to deter search engine indexing and screen scraping in general, but it works wonderfully. In fact, if I were the engineer who devised this detection method, I’d be pretty proud of myself. Unfortunately, now that we know Yahoo’s secret, our scraping workflow just gains one more step: curl > result.html.
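
And if you’d rather stay inside Ruby than shell out to cURL, the same trick works with Net::HTTP. A rough sketch (the URL is the placeholder from the curl example above, and gzip handling is manual since Net::HTTP won’t decompress for you):

require 'net/http'
require 'uri'
require 'zlib'
require 'stringio'

uri  = URI.parse('http://shopping.yahoo.com/path/to/resource')
http = Net::HTTP.new(uri.host, uri.port)

# Claim gzip support so the server treats us like a real browser.
response = http.get(uri.request_uri,
  'User-Agent'      => 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.5) Gecko/20070713 Firefox/2.0.0.5',
  'Accept-Encoding' => 'gzip')

# Inflate the body if the server actually compressed it.
body = if response['Content-Encoding'] == 'gzip'
  Zlib::GzipReader.new(StringIO.new(response.body)).read
else
  response.body
end

# 'body' now holds the same fully-linked HTML a real browser sees.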