tekx – my thoughts
https://phpprotip.com/2010/06/tekx-my-thoughts/ | Tue, 01 Jun 2010 14:56:30 +0000

It’s been about a week or so since tekx ended and I figured that (since I didn’t do one last year) I should put my own personal thoughts down. This was my second year at tekx; I was looking forward to the conference and it did not disappoint. Oh yeah, I’m going to write like my live blogs were and do mostly stream of thought with as little editing as possible. Want the TLDR version? It was fucking awesome, you should berate yourself for not finding a way to go and resolve to be there next year.

So last year at tek, I twittered my thoughts/notes on the sessions. I twittered so much that only @spooons and I seemed to be in competition for most tweets tweeted at tek (that last snippet could almost be a tongue twister). I had meant to compile those thoughts into more formal posts/notes. That didn’t work out as planned. This year, I saw a post in the attendees google group about the use of recording devices in sessions, which led me to purchase a Flip MinoHD with the full intent of having something to barter for other people’s recorded sessions. It wasn’t until the tutorials and first session that I realized I just had a flimsy excuse to buy myself an HD recording device. The tutorials were too long to record and the first session made me realize I’d never get good positioning for the camera to have a decent recording. This is when I thought up what turned out to be the best idea ever…I decided to do live blogs about the experience. Go ahead and look at the other posts if you want my specific thoughts on each tutorial and session. This post is some additional after-thoughts and what-not.

Day 1: the arrival

I got in later than I thought I would for check-in due to a combination of weather and traffic. It was amazing how familiar and less nerve-wracking the drive up is when you know some people who’ll be there early and are less intimidated by the prospect of getting lost. After decompressing from the trip (and missing dinner with @frozensolidone and some others going to Giordano’s), I ended up joining @elazar, @derickr, @sweatje, and a bunch of other people at Shoeless Joe’s (aka Pantless Pete’s, Thongless Tina’s, Braless Betty’s…). We ate, drank and had all manner of tek fun. Some of us even made plans for Lost the next night.

Day 2: I’s gunna lern me sumting

Among other things I learned at the tutorials, was how to spell properly. >.>

Also learned that your brain can explode from too much awesome but that, in itself, is awesome. The tutorials I attended were Bad Guy For A Day and Best Practices.


Arne‘s talk was about security and showed us how to view our sites the way a hacker would. He also went over social engineering practices and a whole bunch of things I’ve talked about already. The best part was when he had just finished talking about how easy it is to steal information with a USB stick, by letting people’s natural curiosity cause them to plug it into their computer and letting whatever nasty thing you have on it do its job. He of course then proceeded to hand out the very stick he had held up to demonstrate his point, saying it contained the source we’d be going over for the rest of the tutorial. One thing I didn’t put in my live blog that was going through my head was that the USB attack isn’t limited to just flash storage drives. I remember back in 2005-2006 that one could perform the attack with an iPod. Since the iPods back then were able to double as a mass storage device, it wasn’t tough to find a kind soul that would let you “charge your iPod” (maybe I shouldn’t quote that; makes it seem like a euphemism for sex and not a hacker attack) on their computer. I had tried it out by loading the necessary things onto an iPod Nano I had and testing it on my personal machines. I never did get to try to use a friend as a proper guinea pig. Anyways, the point of that rambling was that Arne might want to use something other than the USB stick that has the tutorial source as a visual reference. =D

As awesome as Arne‘s tutorial was, the Best Practices tutorial was one of the things that made the trip worth every single penny and more. First point of awesome is that the range of topics they managed to cover was mind-blowing. The fact that Matthew Weier O’Phinney (@weierophinney) and Lorna Jane Mitchell (@lornajane) were presenting was another point of awesome. I couldn’t begin to espouse the level of respect I had for these two before this tutorial, and the tutorial itself has pretty much ascended them to godhood in my eyes. I had feared this tutorial would be a bad choice since I’m familiar with and practice a lot of the best practices they covered. They kept my interest on topics where I should’ve been going “been there, done that” and, when it came to things I wasn’t familiar with, they gave me a lot of food for thought.

Since this was a Lost night (and the night before classes), @frozensolid, @xiian and I ended up watching Lost and geeking out instead of going out.

Day 3: it has begun

For a lot of people Day 3 is Day 1. I feel bad for those people, they miss out on a lot. Let us take a moment to mourn the awesomeness they lost.

The opening keynote was given by Josh Holmes. The sheer amount of disdain, hatred and anger Microsoft has caused me over the past decade made me leery of the keynote. On the one hand, they were (according to all I heard) a major sponsor this year and on the other, they are responsible for some of the most bloated pieces of shit in the computing world. So when I read the title, I simultaneously WTF’d and giggled.


This may become a rant so the TLDR version is: Josh Holmes managed to win me over. I’m not on the Dark Side but I do see Windows as more than a bloated game console. I think Cal Evans’ post on Microsoft summarizes things best.

With that said, I will carry on with my thoughts on the keynote. To be brutally honest, Josh Holmes rubbed me the wrong way at first. Here’s this guy demanding that we be enthusiastic in our answers to his questions. I haven’t finished my Red Bull yet, I’ll be energetic when I damn well feel like it, thank you very much. I think if I wasn’t intent on making unbiased notes that could be used by others, I would’ve tuned out the rest of what he had to say, and that would’ve been a major loss on my part. For all the jokes that I want to make about the contrast of a guy from the company who made Word trying to tell us to keep things simple, he touched upon a lot of what is wrong/tough about enterprise development. Hell, what can sometimes be wrong about development in general. Sometimes in our narcissistic need for a beautifully engineered solution, we over-engineer things. I’ve seen this happen more often in large companies but sometimes you see it in smaller shops. You have someone who needs a hammer but instead you make them a Gnomish Army Knife. We shouldn’t add complexity to a simple solution. We have to understand what our frameworks are doing in order to know if they truly are the solution to the problem at hand or if we need to use a different tool. We need to concentrate on prioritizing features by ROI, look at usability and test, test, test. These were all sentiments I can stand behind. I’m perfectly happy to continue using OSX at home and Ubuntu at work, but Josh’s keynote has opened my mind to considering Windows deployments. The man did his job well. I say if he can crack the armor of a curmudgeon like me, they should probably give him a raise (totally not sucking up since he’s now following me on twitter. >.> <.< ^^). Seriously though, if part of his job is to turn over or chip away at the “FUCK Microsoft” crowd, then he is doing the job well.

The Zend_Form session given by @akrabat was another thing that paid for my trip. The decorators are something I have been struggling with for quite a while. I later ran into him and was flattered by his compliment on my write-up. He even accused me of saying a lot more than he did (or something to that effect), to which I can only say that he gave me a lot to work with and he said a lot more than he probably thought he did.

Derick Rethans is truly the master of time and space. He completely obliterated my mind with his session. I couldn’t even do a decent write-up in there. In fact, I think I’m still recovering. The longest day of tekx got even better as I went to @tswicegood’s git session. I’ve been debating switching the version control we use at work from Bazaar (bzr) to git. I don’t see the switch anytime soon but did see some hotness beyond offline commit.

Eli White’s presentation on Code & Release Management was awesome. Not only because I’ve been trying to streamline my code & release management systems but also because Eli kicks ass as a speaker. I really wish that I could’ve made his earlier talk but it conflicted with the Zend_Form talk. I think I may have to do a separate tekx inspired post on this talk, it might help me with applying what I learned from it. People should bug me if I don’t have something up in a couple weeks (next week is personally busy otherwise I’d say next week. gonna need at least 2 weeks).

The last session of the day for me was @s_bergmann’s talk on continuous integration and inspection. This was a nice complementary session to have after @eliw’s presentation and after the tutorial from the day before. I’m surprised that my head didn’t explode from so much Best Practices being thrown at me in so little time to process.

I honestly don’t remember what happened that night. Must’ve been really kick ass though. Or my mind is overloaded with so much awesome from the whole tek experience that it doesn’t want to taint it with memories…

Day 4: I want to be a rock star

No, I’m not talking about being that kind of rock star but about Rock Band (which we’ll talk about later). In retrospect, I don’t know what I was thinking. After a few mind-blowing days of tek, I somehow thought it would be a good idea to obliterate what brain cells I had left with a dose of more high level sessions. The day before I at least had some previous experience to bind to what they were talking about. Today, yeah…not so much.

I started out with Derick Rethans’s talk on Xdebug. Even though most of the talk was an overview of Xdebug and how it can be used in your development process, there are 2 things to keep in mind:

  1. The talk is given by the creator of Xdebug
  2. That creator is Derick Rethans. Maybe it’s just me, but the man is intellectually intimidating. He could recite a Beavis and Butthead script and make me feel like I’m missing something blatantly obvious.

After that I was off to Elazar’s talk on new SPL features in PHP 5.3. The talk was awesome, even though I could only keep up with parts of it. The crayon analogy to sets was so clever I didn’t get it until he explained it to me. Can’t think of an easier visual though.

@auroraeosrose‘s talk on streams, sockets and filters gave me a lot to think about. I’ve only vaguely worked with them before but I feel that her talk has prepared me for when/if I need to do extensive work with them.

I ended the day’s sessions by going to @lig’s talk on scalability and MySQL. I somehow missed the part at the beginning where she said the talk would be at an intermediate to advanced level of knowledge of MySQL. This was good and bad for me since I hope that I might be approaching the intermediate level but very much consider myself a n00b. The good news is that I found out about a whole lot of hotness that’s in MySQL 5.5. The bad news is that about halfway through the talk my brain effectively called me out on the overloading and decided to shut down. For that half of the session, I was effectively going “bwah? how do I make a note on that. I’m pretty sure she said some words there.”

After a thoroughly exhausting day, I went with a group to Giordano’s to get pizza. OMG awesome pizza. I want to move to Chicago and nom that pizza. The next part of the tek adventure was Rock Band night (told you I’d get to this part). I forget what may have originally been planned for the evening activities, but I do know that @elazar and I did conspire to shape it into Rock Band. Ok, we were going for karaoke but then it turned into Rock Band. Microsoft was gracious enough to help the endeavor by providing some of the necessities (360 and the Rock Band 2 band set). I was in one of the 2 bands in the ultimate showdown. We unfortunately lost due to unnamed, *cough* Tom *cough*, reasons. The coolest thing of the evening was that the whole Rock Band package was given out to the top band to split amongst themselves. Grats to @zburnham, @frozensolidone and @engyma (if I recall correctly) on winning. I did manage to show off some drum skills when the set was properly calibrated. I also met a cool dude from Maryland named Mike that plays real life guitar. I gave him my information for him to keep in touch but unfortunately did not get his before leaving tek. Sad Panda. The night’s festivities continued on to more rocking out at @eliw’s room. I managed to use my Flip MinoHD and get some recordings but since then haven’t gotten to upload them. Maybe I’ll find time this week. @rdohms provided some refreshments for the festivities and much awesomeness was had that night.

Day 5: closing time

I tried to go to Ben Ramsey’s talk on memcached and APC but only made it partway through because I was sneezing up a storm and felt miserable. I spent part of that session and the next recouping. Sadly I missed @lornajane’s Open Source Your Career session, which I heard was chock full of asskickery. I did however make @auroraeosrose’s talk on cross platform PHP. The talk was highly informative and entertaining. In fact, in addition to Josh Holmes’s keynote, it was a deciding factor in my conclusion that Microsoft is showing they’re not only playing the game but playing to win. Windows development environments are becoming less of a joke and I’d be dumb not to look into the feasibility or at least keep an eye on things. Marco’s closing remarks were short and sweet; he encouraged us to give back to the community and reminded us that user groups are the lifeblood of the community. I swallowed that Kool-aid and ended up starting a user group the following week. @dragonbe, @caseysoftware and a bunch of others were kind enough to give me some tips on starting one up since I missed the community panel in favor of checking out the cross platform PHP session.

Some people left right after and got hit by the @eliw curse. I elected to stay because last year, leaving right after, I got hit by the ground travel version of the curse. Needless to say, staying for the extra night was worthwhile in general. I got to go see Iron Man 2 with @dragonbe and @rdohms. We introduced Mountain Dew to @dragonbe, which he apparently enjoyed. As we got back, we ran into some of the tek stragglers and went to Shoeless Joe’s. The next day @rdohms and I went and got some IHOP food. I had been meaning to get there for breakfast all week. If you’ve actually read this far, I commend you. Did I mention that I babble? Maybe I should mention that at the beginning when I do next year’s writeup.

THE END?

tek x – closing remarks
https://phpprotip.com/2010/05/tek-x-closing-remarks/ | Fri, 21 May 2010 17:56:28 +0000

Closing remarks will be given by Marco Tabini (@mtabini).

User groups are the lifeblood of the PHP community. I need to figure out how to start a user group in the Bloomington, Indiana area. I meant to last year, but I figure a public declaration will motivate me more.

This year there were a lot of submissions for sessions. Next year, and for conferences in general, if you have an idea for a talk, you should put your paper in. Don’t be afraid that people won’t want to hear about it.

Short and sweet closing so we get to eat now.

tekx – cross platform php
https://phpprotip.com/2010/05/tekx-cross-platform-php/ | Fri, 21 May 2010 17:22:40 +0000

Talk is by Elizabeth Marie Smith (@auroraeosrose).

PHP is officially supported on:

  • Unix
  • Windows
  • Linux
  • Embedded Systems
  • Risc
  • NetWare
  • I5 (or whatever it’s called this week)

PHP is starting to drop support for old systems. Like 5.3 won’t run on Windows 2000. So try to keep up to date. “If you’re running PHP 4, GTFO”

If you write your code right and you get a crazy manager that decides to move, you don’t have to deal with the headaches. For the most part, PHP takes care of the hard stuff. You just need to know the edge cases and know the key differences.

Unix has been around for a long time; it was developed in 1969. There are different distributions, which fall under proprietary, open source and BSD. There is a standard file system setup. Most are case sensitive. They are CLI based. Shared library formats to look at are ELF and Mach-O. Linux is just a kernel.

Windows is case insensitive, though NTFS is case sensitive; half of Windows programs don’t know how to deal with this. It is a GUI based system with drive letter abstraction for the file system. Using FAT really messes up PHP, so if you’re using it, GTFO. Look in the superglobal for the environment data, and never assume any environment data, like PHP being installed on C:\.

DLL HELL

SxS is the fix and only 2 people on earth know how to use it (documentation…sigh). Most important thing to know about DLLs: what is my search path? Your directories are listed in the PATH environment variable. Make sure not to set it system-wide. Bad juju. I’m going to have to link to the slide titled “The Horrible Error”. Until Windows has apt-get, you have to know your system.

Installation and Configuration

On Unix/Linux, you use a distribution or compile your own. Distributions will screw things up. They backport security fixes and add extensions that change behavior (like Suhosin). They alter header files, strip binaries of symbols, turn experimental flags on, turn off default extensions (–disable-all), take forever to update versions, and use system libraries instead of bundled versions. Basically, you should build your own and never rely on the system distribution. Though distributions are great simple installs for newcomers, and some offer support.

How do you decide? If it’s a production box, you NEED to compile it yourself. You need to know what you’re putting there. Distribution is good for the dev machine, unless you have a crazy person who will want to deploy it from the development box.

Windows is the only system for which PHP currently provides binaries. Use PHP’s binaries or compile your own if you’re brave. It’s not as scary as it sounds though. The dependencies are the trickiest part.

Environmental Differences

Mostly $_SERVER contents. Use isset() and never assume a variable is there. Also clean the data, don’t assume because it’s from the system that it is safe.

DON’T HARDCODE ANYTHING.
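Those two rules can be sketched in a few lines; the log-file name here is just an illustration:

```php
<?php
// Never assume a $_SERVER entry exists; check with isset() and fall back.
$docRoot = isset($_SERVER['DOCUMENT_ROOT']) ? $_SERVER['DOCUMENT_ROOT'] : getcwd();

// Never hardcode separators or temp paths; PHP has portable constants for them.
$logFile = sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'app.log';

// PHP_EOL is "\r\n" on Windows and "\n" on Unix-like systems.
file_put_contents($logFile, 'started' . PHP_EOL, FILE_APPEND);
```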

Main server difference between IIS and Apache is in the SAPI not the server.

Be careful with streams and sockets. Another gotcha is process spawning (COM can help in some cases with wscript.shell stuff). Be sure about using platform specific stuff (pcntl, etc). If you run into something that doesn’t work the same way on multiple platforms, you should file a bug. If you want your bug to go to the front of the class, be sure to write a good test case.

tekx – memcache and apc
https://phpprotip.com/2010/05/tekx-memcache-and-apc/ | Fri, 21 May 2010 14:57:02 +0000

Ben Ramsey (@ramsey) is giving a talk on memcached and APC. You use a cache to reduce the retrieval queries to the database, to reduce the number of requests made to external services, to reduce the time spent computing data and to reduce filesystem access.

Types of caches

  • file system
  • database
  • shared memory
  • RAM disk
  • object cache (memcached and APC)
  • Opcode cache (APC)

Memcached was created for livejournal as a way to avoid making too many reads to their database. Memcached is a daemon that acts as a simple key/value dictionary.

Who uses it?

  • facebook
  • digg
  • youtube
  • wikipedia
  • moontoast
  • many others

Principles

  • fast asynchronous network I/O
  • NOT a persistent data store
  • It does not provide redundancy
  • data is not replicated across the cluster
  • It doesn’t handle failover

The daemons are not aware of each other. Memcached does not provide authentication, so you shouldn’t put the daemons in public areas. They work great on a small, local-area network. A single value cannot contain more than 1MB of data. Keys are strings limited to 250 characters.

Simple protocol with storage commands: set, add, replace, append, prepend, cas; retrieval: get, gets; deletion: delete; increment/decrement: incr, decr; other: stats, flush_all, version, verbosity, quit.

default port is 11211.

There is a PECL memcache extension, but the examples use the PECL memcached extension. There is an important distinction: the memcached extension gives you extra information, like whether an item wasn’t found.
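A rough sketch of that distinction with the PECL memcached extension (this assumes a daemon running locally; the key names are made up):

```php
<?php
// PECL memcached extension; assumes a memcached daemon on 127.0.0.1:11211.
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->set('answer', 42, 300); // key, value, TTL in seconds

$value = $m->get('missing-key');
if ($value === false && $m->getResultCode() === Memcached::RES_NOTFOUND) {
    // A real cache miss -- the older memcache extension can't tell you
    // whether the key was absent or the stored value was literally false.
    echo "cache miss\n";
}
```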

Missed the APC part of the talk b/c I’m allergic to something there or coming down with something. Either way, was sneezing up a storm and feeling miserable. Skipped rest of the talk so I wouldn’t be disruptive.

tekx – lig’s talk on scalability and mysql
https://phpprotip.com/2010/05/tekx-ligs-talk-on-scalability-and-mysql/ | Thu, 20 May 2010 21:56:30 +0000

@lig will be talking about MySQL 5.5 and scalability this session. She is a Senior Technical Support Engineer for MySQL.

We will be covering

  • semi-synchronous replication
  • performance schema
  • SIGNAL/RESIGNAL
  • more partitioning options
  • InnoDB – LOTS of InnoDB (performance and scalability improvements)

In 5.5 InnoDB will be the default!!! WOOT.

Default replication is asynchronous, meaning the master writes to the binary log and the slave connects and “pulls” the contents of the binary log. The bad thing is that if the master crashes, there’s no guarantee that a slave has all committed transactions.

Semi-synchronous replication is an alternative to asynchronous replication, a midway point between asynchronous and fully synchronous. The master only waits for a slave to receive an event; it doesn’t have to wait for slaves to actually commit.

Performance schema tracks at an extremely low level. Just like Information schema, tables are views or temporary tables. Activation doesn’t cause any change in server behavior. This is designed for advanced users.

Think of SIGNAL as an exception, a way to “return” an error. You get exception-handling logic for stored procedures, stored functions, triggers, events and db apps.

RESIGNAL lets you pass error information up. Think of it as a catch. It requires an active handler to execute. It lets you program your PHP side to catch that very specific handling.
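From the PHP side, a SIGNALed error surfaces as an ordinary SQL error. A sketch with PDO, assuming a MySQL 5.5+ server and a hypothetical stored procedure `withdraw` that raises SQLSTATE '45000':

```php
<?php
// Assumes a local MySQL 5.5+ server with a procedure along the lines of:
//   CREATE PROCEDURE withdraw(IN amount INT)
//   BEGIN
//     IF amount > 100 THEN
//       SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'over the limit';
//     END IF;
//   END
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->query('CALL withdraw(500)');
} catch (PDOException $e) {
    // The SQLSTATE and MESSAGE_TEXT set by SIGNAL come through here.
    echo $e->getCode(), ': ', $e->getMessage(), "\n";
}
```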

Column partitioning is a variant on RANGE and LIST partitioning. It allows the use of multiple columns in partitioning keys. All columns are taken into account for placing rows in partitions and for partition pruning. It supports the use of non-integer columns (DATE/DATETIME/strings…).

The major difference from plain RANGE is that it doesn’t accept expressions, only column names. Looking at an example: you’re comparing full tuples, not the individual parts, and both parts of the tuple have to pass.

LIST columns allow for multiple column values. You do not need to convert values to integers to work with them. Much easier to read. The example slide is very hot.

Mutex: Mutually Exclusive lock. Apparently horrible for concurrency.

Read ahead is when InnoDB tries to be smart for you, prefetching multiple pages into the buffer cache asynchronously. You can now control when InnoDB performs a read-ahead operation by setting innodb_read_ahead_threshold. The default is 56.

Edit/Note: There was a lot more discussed by @lig but it was over my head a bit. I’ll try to look at her slides and see if I can understand them. Trying the slideshare embed below so that others can see. Feel free to explain the rest of the slides to me because, like I said, they’re beyond my current skill level.

tekx – streams, sockets and filters
https://phpprotip.com/2010/05/tekx-streams-sockets-and-filters/ | Thu, 20 May 2010 19:31:11 +0000

@auroraeosrose‘s talk. Going over definitions so that everyone is on the same page.

Everything uses streams

  • include/require
  • stream functions
  • file system functions
  • many other extensions

What is a stream in PHP? Streams let you access input and output very generically. You can read and write linearly, and streams may or may not be seekable. Data comes in chunks. Think of a 15GB file: would you want to read that into memory? Of course not, PHP will laugh at you for being stupid.

Thank Sara Golemon and Wez Furlong for the awesomeness of streams.

Edit: Originally didn’t get Wez’s name in the presentation, he thankfully provided me with his identity.

Talking about file_get_contents. Watch out for flock, transport and wrapper limitations, non-existent pointers (infinite loops can and will happen) and error handling. flock is broken on mod_php and network shares, so don’t use it if you have those. The only errors you get are E_WARNINGs, so you’ll have to make an error handler or use the shut-up (@) operator. Don’t use shut-up, you’re a nub if you do. It’s better to take the time to make a custom handler.
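Here is a minimal sketch of such a handler (the path is deliberately bogus):

```php
<?php
// Capture the E_WARNING from a failed file_get_contents() with a custom
// error handler instead of the shut-up (@) operator.
$lastError = null;
set_error_handler(function ($errno, $errstr) use (&$lastError) {
    $lastError = $errstr;
    return true; // swallow the warning; we report it ourselves
});

$data = file_get_contents('/no/such/file');
restore_error_handler();

if ($data === false) {
    echo "read failed: $lastError\n";
}
```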

Filters

Filters allow you to perform operations on stream data. They can be appended or prepended as well as can be attached to read or write. When a filter is added for read and write, two instances of the filter are created. It is advised to attach to read or write but not both even though both is possible. Watch your modes because filters are smart and will attach to the mode you assign to your stream.
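For instance, attaching one of the built-in filters to the write side of a stream (php://memory is just a convenient in-memory target):

```php
<?php
// Everything written through the stream passes through string.toupper first.
$fp = fopen('php://memory', 'w+');
stream_filter_append($fp, 'string.toupper', STREAM_FILTER_WRITE);

fwrite($fp, "hello filters\n");
rewind($fp);
echo stream_get_contents($fp); // HELLO FILTERS
```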

Filter gotchas

  • data has an input and output state
  • when reading in chunks, you may need to cache in between reads to make filters useful
  • use the right tool for the job

Sockets

  • Network stream, network transport, socket transport
  • slightly different behavior from a file stream
  • bi-directional data (it goes both ways baby)

Best definition @auroraeosrose has heard is from UNIX that describes sockets as wormholes.

The sockets extension lets you do raw sockets; it’s outdated, and all the new APIs in the streams and filesystem functions are replacements for the extension.

Processes work like streams. They are Black Magic

  • pipes
  • STDIN,STDOUT,STDERR
  • proc_open
  • popen

Stream contexts are how you tell your streams how to behave (parameters and options).

PHP 5.3 has built-in streams. This is better seen in the slide; I should try to link to it later. We have a magic variable ($http_response_header) to get the headers when using http. When you’re doing a POST, you still use file_get_contents. Extensions allow you to talk raw ssl as well as ssh, phar, zlib, bzip.
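A sketch of a POST through a stream context; $http_response_header is populated automatically after the call, and example.com stands in for a real endpoint:

```php
<?php
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query(array('name' => 'tekx')),
    ),
));

$body = file_get_contents('http://example.com/', false, $context);
print_r($http_response_header); // the raw response headers land here
```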

An example for sockets uses the exact way that PayPal tells you to do it.

Built-in filters

Generally useless, but provided by PHP.

  • string filters
    • string.rot13
    • string.toupper
    • string.tolower
    • string.strip_tags
  • dechunk
    • decode remote HTTP chunked encoding streams
  • consumed (eats data and that’s all it does)
tekx – new spl features in php 5.3
https://phpprotip.com/2010/05/tekx-new-spl-features-in-php-5-3/ | Thu, 20 May 2010 17:11:32 +0000

Matthew Turland is presenting. If you haven’t heard of him, you’re a nub (k, not really, but I really wanted to put that in a post somewhere). He’s an author for php|architect, author of Web Scraping with PHP, a contributor to Zend Framework, and lead developer of Phergie. So yeah, kind of a big deal. He currently works at Synacor, which provides internet solutions to ISPs, media companies and advertisers.

The biggest change to SPL in 5.3 is the containers. Why containers? Arrays aren’t always great. The underlying hash table algorithm is not always ideal for the task at hand.

We’ll be looking at a lot of benchmarks. The code is available on github so you can compare the performance results for yourself.

Starting simple with a list. The corresponding structure for this is SplFixedArray. It is like an array but with a fixed length, and it only allows integers >= 0 for keys. As the number of elements grows, SplFixedArray beats arrays on memory usage.
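A quick illustration of those constraints:

```php
<?php
$a = new SplFixedArray(3); // fixed length, integer keys >= 0 only
$a[0] = 'x';
$a[1] = 'y';
$a[2] = 'z';
echo $a->getSize(), "\n"; // 3

try {
    $a[3] = 'nope'; // out of range, unlike a plain array
} catch (RuntimeException $e) {
    echo "index out of range\n";
}
```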

SplDoublyLinkedList

Same unlimited size as arrays without the associated hash map algorithm. Performance is worse but memory usage is better.

SplStack

Two very basic operations: push and pop. LIFO (Last In, First Out) data structure. Oddly, the class doesn’t get better performance; there seems to be a bottleneck. He’ll get to filing a bug later. ;p

SplQueue

Two operations: enqueue and dequeue. FIFO (First In, First Out) data structure. Again, the performance improvement correlates with the number of elements.
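The two classes side by side (both sit on top of SplDoublyLinkedList):

```php
<?php
$stack = new SplStack();
$stack->push('a');
$stack->push('b');
echo $stack->pop(), "\n"; // b -- last in, first out

$queue = new SplQueue();
$queue->enqueue('a');
$queue->enqueue('b');
echo $queue->dequeue(), "\n"; // a -- first in, first out
```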

SplHeap

Heaps operate based on a comparison function; the heap internally reorders items based on the comparison. Worst case, you can override SplHeap::compare(). Sorting happens as you insert the items. Again, with a smaller number of elements you should stick with arrays, but if you’re dealing with a large number of elements, you will see a performance difference.
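A small sketch of overriding compare(); the built-in SplMinHeap and SplMaxHeap cover the common orderings:

```php
<?php
// A positive return value floats $a toward the top of the heap.
class LongestFirstHeap extends SplHeap
{
    protected function compare($a, $b)
    {
        return strlen($a) - strlen($b);
    }
}

$h = new LongestFirstHeap();
$h->insert('hi');
$h->insert('hello');
$h->insert('hey');
echo $h->extract(), "\n"; // hello -- the longest string comes out first
```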

SplPriorityQueue

Priority queues are very similar to triage. Similar to a heap, in fact it uses a heap internally for storage. Accepts a priority with the element value. Also compare can be overridden.
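The triage idea in code (the labels are made up):

```php
<?php
$pq = new SplPriorityQueue();
$pq->insert('flesh wound', 1);
$pq->insert('cardiac arrest', 10);
$pq->insert('broken arm', 5);
echo $pq->extract(), "\n"; // cardiac arrest -- highest priority first
```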

Sets

There’s a similarity to doing queries with joins. The slide shows some crayons; wonder what the analogy is, or maybe he just likes crayons. I like crayons. A query with a UNION operation is a set operation. A hash table with objects for keys is the best visual for understanding a composite hash map. A set focuses on a group of values rather than individual values, with operations like union, intersection, difference, etc. He’s using SplObjectStorage for the code examples.
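A minimal sketch of SplObjectStorage acting as a set:

```php
<?php
$red  = new stdClass();
$blue = new stdClass();

$set = new SplObjectStorage();
$set->attach($red);
$set->attach($red); // attaching the same object twice is a no-op

var_dump($set->contains($red));  // bool(true)
var_dump($set->contains($blue)); // bool(false)
echo count($set), "\n";          // 1
```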

Etienne Kneuss did a lot of the work on the new SPL features in PHP 5.3 so we can thank him for the new hotness. Etienne’s blog is a good resource for SPL as well as the SPL entries in the PHP manual. Elizabeth Smith has written up a post called “SPL to the Rescue” which is useful. Elazar will also put this presentation as a blog post on his site.

Future of SPL

  • Graphs
    • contains nodes and edges connecting them
    • directional/non-directional
    • cyclic/acyclic
    • connected/unconnected
    • edge costs
  • Trees
    • acyclic unidirectional graph
    • hierarchical in nature

Elazar did a great job of taking concepts that would normally go over my head and making them understandable.

tekx – xdebug (Thu, 20 May 2010) https://phpprotip.com/2010/05/tekx-xdebug/

Today’s session is given by Derick Rethans, the author of Xdebug, so he might know what he’s talking about. 😉

Xdebug provides protection against things like stack overflows and infinite recursion in PHP; you control this by setting the nesting level. It also provides pretty formatted errors, and not only are they pretty, they include more information such as memory usage, time, function name, and location for items in the call stack. It can also collect parameter information, which shows the type, with options to display the variable name (if possible) or values. It defaults to minimal information to avoid overwhelming the browser's HTML display.

Another hot option is the ability to link to the files. var_dump() is overloaded to create a pretty, color-coded output. You can turn this off by setting the overload var_dump option to 0.
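For reference, the settings mentioned so far look something like this in php.ini (directive names are from the Xdebug 2 docs as I remember them, so double-check against your installed version):

```ini
; protect against runaway recursion
xdebug.max_nesting_level = 100
; how much parameter detail to show in stack traces (0-4)
xdebug.collect_params = 4
; set to 0 to turn off the pretty var_dump() overload
xdebug.overload_var_dump = 0
```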

Function trace

You have to be careful with this because it can cause you to run out of disk space. What you will see is the trace start, memory usage, and nesting level. Green lines show return values. One useful case is with realpath(), to determine whether the issue is that a file doesn't exist. It can also show when you change the property of an element, as well as modified variables. The trace filename output can be specified, which allows you to create files for specific URLs or whatever you require. There is a script in the contributions directory that can help with analyzing xdebug output.

You can trace select parts of the application with the xdebug_start_trace() and xdebug_stop_trace() functions.
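The ini side of function tracing looks roughly like this (directive names assumed from Xdebug 2, and the paths are mine):

```ini
xdebug.auto_trace        = 1            ; trace every request
xdebug.collect_return    = 1            ; include return values (the green lines)
xdebug.trace_output_dir  = /tmp/traces  ; watch your disk space!
xdebug.trace_output_name = trace.%R     ; %R = request URI, one file per URL
```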

Scream

PHP’s @ (shutup) operator hides warnings and errors. Setting xdebug.scream=1 makes PHP ignore @.

Recording headers

Xdebug collects all headers being set, implicitly and explicitly. This is awesome for testing and unit-tests. The function is called xdebug_get_headers().

Code coverage

If you use xdebug_start_code_coverage(), it will only monitor files parsed after the call. There is a tool called ‘phpcov’ that will render the output in a better format. Since the tool was written by @s_bergmann, the views look similar to phpunit coverage view.

The profiler looks pretty hot. I didn't catch the name of the viewer, but it gives a visual representation of the program execution (I think... I'm not exactly sure what I'm looking at other than that it deals with profiling).

DBGp = common debugging protocol.

Activating the remote debugger

You enable the remote option and set the host and port. On the shell, you'll need to export a parameter, or set a unique session in your browser. Firefox and Chrome have plugins: easy Xdebug for Firefox and Xdebug Helper for Chrome.
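A sketch of the remote-debugging settings (Xdebug 2 directive names, assumed; the host is just an example):

```ini
xdebug.remote_enable = 1
xdebug.remote_host   = localhost  ; where your IDE/debug client listens
xdebug.remote_port   = 9000       ; the DBGp default
```

On the shell it's something like `export XDEBUG_CONFIG="idekey=session1"` before running the script.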

tekx – continuous inspection and integration of PHP projects (Wed, 19 May 2010) https://phpprotip.com/2010/05/tekx-continuous-inspection-and-integration-of-php-projects/

“Continuous Integration is about preventing your developers from burning in Integration Hell” – @s_bergmann

Integration Hell

My code is perfect, yours is pretty good and then Martin…well, he’s a fucking idiot. (lost where he was going with this story). D=

Team members should integrate their work frequently. This reduces errors when managing multiple team members. The value (ROI) is in:

  • reducing risk
  • reduce repetitive processes
  • generate deployable software
  • enable better project visibility
  • establish greater product confidence
  • helps with late discovery of defects
  • prevents low quality software
    • coding standard adherence
  • prevents lack of deployable software

This practice focuses on software design that uses unit tests to prevent and detect defects. All of this works to provide significantly better quality software.

How do we implement it

We need build automation.

You start by identifying repetitive processes (running tests, analyzing source code, packaging, deployment). You want to make these processes a non-event.

Static code analysis

Lines of code gives a text-based metric for code size.

  • Lines of code (loc)
  • comment lines of code (cloc)
  • non-comment lines of code (ncloc)
  • executable lines of code (eloc)

Code duplication. Is it textually identical? Token-for-token identical? Functionally identical? Duplicate code contradicts code reuse, and co-evolution of clones hinders maintenance. There is a tool called PHPCPD (a copy-paste detector). The highest they've seen is 16% duplication in a code base of 5 million lines of code.

code complexity

Cyclomatic complexity counts the number of branching points such as if, for, foreach, while, case, catch, &&, ||, and the ternary operator. NPath complexity counts the number of execution paths. Higher complexity leads to more errors and makes testing harder.
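To make the counting concrete, here's a little function of my own with the branching points annotated (cyclomatic complexity 5: the base path plus four decision points):

```php
<?php
// Base complexity 1, plus one for each branching point below.
function shippingCost(array $items, bool $express): int
{
    $cost = 0;
    if ($express && count($items) > 0) { // if: +1, &&: +1
        $cost += 10;                     // express surcharge
    }
    foreach ($items as $price) {         // foreach: +1
        $cost += ($price > 100) ? 0 : 2; // ternary: +1
    }
    return $cost;
}
```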

You can analyze code with “sniffs”.

build automation

  • apache ant
  • gnu make
  • phing
  • rake
  • shell scripts
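As a taste of what build automation looks like, here's a hypothetical Phing build file (the project, target, and path names are mine; check the Phing docs for the real task attributes):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="myapp" default="build">
  <!-- make the repetitive stuff a non-event -->
  <target name="test">
    <exec command="phpunit tests" checkreturn="true"/>
  </target>
  <target name="sniff">
    <exec command="phpcs --standard=Zend src" checkreturn="true"/>
  </target>
  <target name="build" depends="sniff,test"/>
</project>
```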

Reviewing some of the tools that got covered in another talk I blogged about. If anything new/interesting is added, I'll list it. @s_bergmann recommends Hudson (http://hudson-ci.org). Apparently it's very easy to use and is used by some big names like Sun.

tekx – code release & management (Wed, 19 May 2010) https://phpprotip.com/2010/05/tekx-code-release-management/

@eliw starts off laying down his street cred. He’ll be covering how to control the process of version control. As we’ve been going over the past week, use version control. The talk will be focusing on subversion as the technology but the talk will be dealing with the higher level concepts.
Basic Version Control Terminology

  • commit/check-in
  • branch
  • tag
  • trunk
  • merge

Subversion thinks in terms of a directory structure. Projects are subdirectories of a repository. The mainline (trunk) is a subdirectory of the project. Branches and tags have parallel directories.

Your release management should come up with general rules that you apply.

  • will you allow intermediate (non-working) checkins?
  • where should you check code in?
  • are there places that are less controlled?
  • how does this flow into releases?
  • what about tags vs branches vs trunks?

I’m not seeing a reason to allow intermediate (non-working) checkins, that seems like a recipe for disaster.

No matter what your style of management, your trunk should contain the ‘core’ codebase. Branches are used to segment off areas of responsibility. Tags should only be used to mark a specific state of the code, i.e. a release.

Different Branch methodologies (3 main styles)

  • stage branches
  • feature branches
  • release branches

With stage branches, all work is committed against the trunk. When you're ready for a release, you merge into staging, then after testing, you merge into production. The catch with this strategy is bringing your changes back to trunk. The cons: no parallel work, no old patches, and room for error. There is no way to patch old releases. This setup is about moving forward constantly, which works for websites, but if you have clients on different versions of an application, things will get a bit hairy. Forgetting to merge things back can cause errors.

With feature branches, all new work you are doing is done in its own branch. You merge the branch back into trunk after you've tested it. Trunk is then tagged as needed for phases (testing/QA, releasing, etc.). Parallel work and long-scale work become easy. The downsides: you're often creating branches (this depends on your VCS), there is a lot of merging, there are no old patches, and fixes are complicated.

So far my current release system seems to be a combination of stage and feature branching. I’ll tell you right now that it has been a pain in my ass. Hopefully the next strategy will be my…oh yeah, there is no magic bullet.

With release branches, all new work is done on trunk, and when you're ready for release, you create a version branch (/branches/v3.0). You can test against the branch and make bug fixes against the branch. You then tag the branch with a version tag for release. One of the big pros is that maintenance work is easy. It is OK with long-scale work. There's some parallel work and very little merging; the only time you merge is when you do a bug fix on the branch, which has to be merged back to trunk. Cons include branch/tag creation. The biggie is that this assumes a single goal.
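The release-branch flow in Subversion commands, roughly (the `^/` repo-relative syntax needs svn 1.6+, and the version numbers are just examples):

```shell
# cut the version branch from trunk
svn copy ^/trunk ^/branches/v3.0 -m "Branch for the 3.0 release"
# ...test and commit bug fixes directly on the branch...
# tag the tested branch for release
svn copy ^/branches/v3.0 ^/tags/v3.0.0 -m "Tag 3.0.0"
# the one merge you do: bring branch fixes back to a trunk working copy
svn merge ^/branches/v3.0 ./trunk-wc
```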

Options for pushing code live:

  • have a script
  • handle multiple machines
  • use for all phases staging/testing
  • have a rollback procedure
  • multiple ways to accomplish
  • incorporate everything together
    • Services, DB, PHP, etc

This is a great talk so far, I’m having a lot of great ideas for my release structure but something tells me that where I work now won’t let me put this in place. D=

A live checkout on the server is very simple. The big drawbacks are conflicts and being hard to automate and roll back. As of a year ago, Facebook did this. Awesome, I'm doing something that Facebook is doing, so they've had the same pain-in-the-ass problems just mentioned that I've had.

Another option is to export and rsync. The export part is what I'm missing from what I currently do at work, though I think if I can make a script for pushing, I might be able to cut the live checkout out of my current procedure. You'll have problems with partial updates with rsync: you might have web traffic hitting the codebase while it's partially updated.

Solutions?

  • take website offline
  • use symlinks

Rolling back with rsync means you have to rsync again. Using symlinks means you just change the symlinks.
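Putting the pieces together, a deploy script might sketch out like this (the host, paths, and tag are all made up):

```shell
# export a clean copy (no .svn directories) of the release tag
svn export ^/tags/v3.0.0 build/v3.0.0
# push it to the server into a versioned directory
rsync -az --delete build/v3.0.0/ deploy@web01:/var/www/releases/v3.0.0/
# flip the symlink so the swap is atomic; rollback = point it back
ssh deploy@web01 'ln -sfn /var/www/releases/v3.0.0 /var/www/current'
```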

Overall, amazing session that helped out on some issues I was trying to work out recently. Tek has paid for itself twice over and it’s only halfway done!
