
 
Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     

Selenium India 2018 Conference – Keynotes and Pre-Conference Workshop Announced

Friday, March 23rd, 2018

The Selenium India 2018 Conference is thrilled to announce our keynote speakers:

Keynote Speakers

Program: We’ve received a total of 139 proposals via our open submissions system. Our program team is currently reviewing the proposals and we should have the first list of speakers next week.

Pre-Conf Workshop: We are happy to offer the following four full-day, deep-dive workshops on June 28th (the pre-conference day):

About the Conf:

The Selenium Conference is a non-profit, volunteer-run event presented by members of the Selenium Community.

The goal of the conference is to bring together Selenium developers & enthusiasts from around the world to share ideas, socialize, and work together on advancing the present and future success of the project.

Sponsors:

Selenium Conference is a great opportunity to meet and get to know hundreds of the world’s top Selenium practitioners. Whether you’re looking to connect with experienced developers, testers, and users of Selenium, hire automation engineers, promote your product, or just give back to the community, this is the place to be. We love helping our sponsors find innovative ways to interact with the community and achieve a great return on their investment in the conference.

Selenium Conference is organized by passionate, volunteer members of the Selenium community. We rely on corporate sponsorship to keep ticket prices low for attendees, as well as to help cover the cost of the venue, food, beverages, t-shirts, and more.

Check out our sponsorship guide!

Spread the Word: If possible, we request you to print these posters and put them up on your office notice board, or forward them to folks in your network.

  

Presenting SeConf 2014, the official Selenium Conference in Bangalore on Sep 5th and 6th

Thursday, June 12th, 2014

We are delighted to announce that this year we’ll be hosting the 4th annual (official) Selenium Conference in Bangalore, India. This is a golden opportunity to meet the Selenium and broader test-automation community.

The goal of the conference is to bring together Selenium developers & enthusiasts from around the world to share ideas, socialise, and work together on advancing the present and future success of the project.

If you are interested in presenting at the Selenium Conf, please submit your proposals at http://confengine.com/selenium-conf-2014

Registrations have already started. Register now at http://booking.agilefaqs.com/selenium-conf-2014

To know more about the conference, please visit http://seleniumconf.org

Simple Regression Testing for Static Web Sites

Wednesday, November 18th, 2009

For Freeset, I’ve always been on the quest for the Simplest Thing that Could Possibly Work. In a previous post, I explained how we’ve embraced an ultra-light process (call it lean, if you like) to build their e-commerce site.

In that post, I talked about our wish to create a Selenium test suite for regression testing. But it never got high enough on our priority list (especially because we mostly have static content served from a CMS as of now).

While that is something I wanted to tackle, last night, when I was moving the Industrial Logic and Industrial XP sites over to new server hardware, I wanted a quick way to test whether all the pages were displayed correctly after the move. This was important since we switched from Apache to Nginx, which handles secure pages (among other things) slightly differently.

So I asked on Twitter if anyone knew of a tool that could compare 2 deployments of the same website. A few people responded saying I could use curl/wget with diff, recursively. That seemed like the simplest thing that could work for now. So this morning I wrote a script.

# Fetch both deployments of the site, rewrite the dev host name in the
# dev copy, then diff the two trees recursively.
rm -Rf * &&
mkdir live && cd live &&
wget -rkp -l5 -q -np -nH http://freesetglobal.com && cd .. &&
mkdir dev && cd dev &&
wget -rkp -l5 -q -np -nH http://dev.freesetglobal.com && cd .. &&
for i in $(grep -l dev.freesetglobal.com $(find ./dev -name '*')); do
    sed -e 's/dev.freesetglobal.com/freesetglobal.com/g' $i > $i.xx && mv $i.xx $i
done &&
diff -r -y --suppress-common-lines -w -I '^.*' dev live

I’m planning to use this script to do simple regression tests of our Freeset site. We have a live and a dev environment. We make changes on dev and frequently sync it up with live. I’m thinking that before we sync up, we can check whether we’ve made the correct changes to the intended pages. If some pages we did not expect show up in this diff, it’s a good way to catch such issues before the sync.

Note: One could also use diff with the -q option, if all you are interested in knowing is which pages changed. Also note that on a Mac, the sed command’s -i (in-place edit) option does not work as you might expect: BSD sed requires a (possibly empty) backup suffix after -i, so sed -i -e …. ends up treating -e as the suffix and creating backup files with an -e extension. (You need sed -i '' -e …. instead.) #fail.

Ultra-light Development and Deployment Example

Monday, October 26th, 2009

Over the last year, I’ve been helping (part-time) Freeset build their ecommerce website. David Hussman introduced me to folks from Freeset.

Following is a list of random topics (most of them are Agile/XP practices) about this project:

  • Project Inception: We started off with a couple of meetings with folks from Freeset to understand their needs. David quickly created an initial vision document with User Personas and their use cases (about two pages long, on Google Docs). Naomi and John from Freeset quickly created some screen mock-ups in Photoshop to show the user interaction. I don’t think we spent more than a week on all of this. This helped us get started.
  • Technology Choice: When we started, we had to decide what platform we were going to use to build the site. We had to choose between building a custom site using Rails and using a CMS. I think David was leaning towards RoR. I talked to folks at Directi (Sandeep, Jinesh, Latesh, etc.) and we thought that instead of building a custom website from scratch, we should use a CMS. After a bit of research, we settled on CMS Made Simple, for the following reasons:
    • We needed different templates for different pages on the site.
    • PHP: Easiest to set up a PHP site with MySQL on any Shared Host Service Provider
  • Planning: We started off with hour-long, bi-weekly planning meetings (conf calls on Skype) every Saturday morning (India time). We had a massively distributed team. John was in New Zealand. David and Deborah (from BestBuy) were in the US. Kerry was in the UK for a short while. Naomi, Kelsea and others were in Kolkata, and I was based out of Mumbai. Because of the time-zone differences, and because we were all working on this part-time, the whole bi-weekly planning meeting felt awkward and heavyweight. So after about 3 such meetings we abandoned it. We created a spreadsheet on Google Docs, added all the items that had high priority and started signing up for tasks. Whenever anyone updated an item on the sheet, everyone would be notified about the change.
  • User Stories: We started off with User Personas and Stories, but soon we fell back to simple tasks on a shared spreadsheet. We had quite a few user-related tasks, but a one-liner in the spreadsheet was more than sufficient. We used this spreadsheet as a pseudo-backlog. (By no means did we have the rigor to try and build a proper backlog.)
  • Short Releases: We were only working on the production environment. Every change made by a developer was immediately live. Only recently did we create a development environment (a replica of production), on which we do all our development. (I asked John from Freeset if this change helped him; he had mixed feelings. Recently he did a large website restructuring (added some new sections and moved some pages around), and he found the development environment useful for that. But for other things, when he wants to make some small changes, he finds it overkill to make changes to dev and then sync them up with production. There are also things like news, which make sense to do on the production server. Now he has to do them in both places.) So I’m thinking maybe we move back to just the production environment and create a dev environment on demand when we plan to make big changes.
  • Testing: Originally we had plans of at least recording or scripting some Selenium tests to make sure the site was behaving the way we expected it to. This kind of took a back seat and never really became an issue. Recently we had a slight setback when we moved a whole bunch of pages around and the links to them from other parts of the site were broken. Other than that, so far, it’s just been fine.
  • Evolutionary Design: I’ve always believed in and continue to believe in “Do the Simplest, Dumbest thing that could Possibly work”. Since we started, the project has taken interesting turns; we’ve used quite a lot of different JavaScript libraries and hacked a bit of PHP code here and there. All of this is evolving and is working fine.
  • Usability: We still have lots of usability and optimization issues on our site. Since we don’t have an expert with us and we can’t afford one, we are doing the best we can with what we have on hand. We are hoping we’ll find a volunteer some day soon to help us on this front.
  • Versioning: We explored various options for versioning, but as of today we don’t have any repository under which we version our site (content and code). This is a drawback of using an online CMS. Having said that, so far (it’s been over a year) we have not really felt the need for versioning. As of now we have 4 people working on this site and it just seems to work fine. Reminds me of YAGNI. (Maybe in the future, when we have more collaborators, we might need this.)
  • Continuous Integration: Without versioning and testing, CI is out of the question.
  • Automated Deployment: Until recently we only had one server (production), so there was no need for deployment. Since we now have a dev and a prod environment, Devdas and I quickly hacked together a simple shell script (with mysqldump & rsync) that does automated deployment. It can’t get simpler than this. (A rough sketch of what such a script might look like appears right after this list.)
  • Hosting: We talked about hosting the site on its own slice vs. using an existing shared-host account. We can always move the site to another location when our existing, cheap hosting option no longer suits our needs. So as of today, I’m hosting the site under one of my shared-host accounts.
  • Rich Media Content: We debated serving & hosting rich media content like videos on our own site vs. using YouTube to host them. We went with YouTube for the following reasons:
    • We wanted to redirect any possible traffic to other sites which are better tuned to serving high-bandwidth content
    • We wanted to use YouTube’s existing customer base to attract traffic to our site
    • Since we knew we’d be moving to another hosting service, we did not want to keep all those videos on the server, which would then have to be moved to the new server
  • Customer Feedback: So far we have received great feedback from users of this site. We’ve also seen a huge growth in traffic to our site, currently hovering around 1500 hits per day. Other than getting feedback from users, we also look at Google Analytics to see how users are responding to changes we’ve made, and so on.
  • We don’t really have/need a System Metaphor and we are not paying as much attention to refactoring. We have some light conventions but we don’t really have any coding standards. Nor do we have the luxury to pair program.
  • Distributed/Virtual Team: Since all of us are distributed and traveling, we don’t really have the concept of a site, let alone an on-site customer or product owner.
  • Since all of this is voluntary work, sustainable pace takes on a very different meaning. Sometimes what we do is not sustainable, but that’s the need of the hour. However, all of us really like and want to work on this project. We have a sense of ownership. (Collective ownership.)
  • We’ve never really sat down and done a retrospective. Maybe once in a while we ask a couple of questions about how things were going.
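
For illustration, here is a minimal sketch of the kind of dev-to-prod sync script mentioned under Automated Deployment above. This is not the actual script Devdas and I wrote; the database names, paths and DB user below are hypothetical.

#!/bin/sh
# Minimal sketch (not the actual Freeset script): push the dev site
# (database + files) to production using mysqldump and rsync.
# Database names, paths and the DB user below are hypothetical.
set -e

DB_USER=freeset
DEV_DB=freeset_dev
PROD_DB=freeset_live
DEV_ROOT=/home/freeset/dev/public_html
PROD_ROOT=/home/freeset/live/public_html

# Dump the dev database and load it into the production database.
mysqldump -u "$DB_USER" -p "$DEV_DB" > /tmp/freeset.sql
mysql -u "$DB_USER" -p "$PROD_DB" < /tmp/freeset.sql

# Sync the CMS files, templates and uploads across.
rsync -av --delete "$DEV_ROOT/" "$PROD_ROOT/"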

Overall, I’ve been extremely happy with the choices we’ve made. I’m not suggesting every project should be run this way. I’m trying to highlight an example of what being agile really means.

Unable to initialize TldLocationsCache

Thursday, July 9th, 2009

On one of the projects we are using Cargo Maven Plugin to run an embedded Jetty server for our builds. Out of the blue, today, I started getting the following error when I was running my Selenium Tests after deploying the application.

WARN:  Nested in org.apache.jasper.JasperException: org.apache.jasper.JasperException: Unable to initialize TldLocationsCache: null:
org.apache.jasper.JasperException: Unable to initialize TldLocationsCache: null
at org.apache.jasper.compiler.TldLocationsCache.init(TldLocationsCache.java:253)
at org.apache.jasper.compiler.TldLocationsCache.getLocation(TldLocationsCache.java:224)
at org.apache.jasper.JspCompilationContext.getTldLocation(JspCompilationContext.java:526)
at org.apache.jasper.compiler.Parser.parseTaglibDirective(Parser.java:422)
...

No clue why this is happening. Surprisingly, this issue cannot be reproduced on a Windows box; I’m only getting it on my Mac with JDK 1.6 and Maven 2.0.

On googling for this issue, I came across this bug report, which indicated that it might be an issue with the Cargo Maven Plugin. On upgrading the plugin to version 1.0, the issue was solved. 🙂

Need to find out what caused the problem in the first place.

Refactoring Legacy Projects: Scaffolding Technique

Sunday, May 24th, 2009

If you’ve inherited a Legacy Project (project without any tests) and say you want to enhance an existing feature, where do you start?

In such situations, I find myself building some form of workflow tests (scaffolding). I start off using a record-and-playback testing tool to record a couple of scenarios for the feature I want to enhance. Most often, I would take the recorded tests and convert them into re-entrant, independent scripts, so that I can execute them over and over again without needing manual intervention. Basically, this means correctly automating the set up and tear down of the application’s external dependencies like data-stores, email servers, etc. This should not take more than a couple of hours to configure.

This helps me build the initial safety net to start off with. It also gives me a decent understanding of how the feature works. Now I can go inside the code, change something really small and see what impact it has on my tests. Sometimes I tweak my test to see what impact it has on the feature. Basically, I’m using this test as a probe to gain a deeper understanding of the feature’s functionality.

Doing this gives me some confidence to jump in and start refactoring the code, so that I can create an inflection point, break dependencies and start writing unit tests around the core of my feature. In a couple of hours, I should be able to build a solid safety net around my feature using unit tests and/or business-logic acceptance tests.

At this point, I almost always go and delete the initial workflow tests that I had built. This is the reason I call this approach the scaffolding technique.

  • Build some initial workflow tests to help you get in there,
  • Make the necessary code/config changes to write direct tests
  • Gradually build a solid safety net around the feature
  • The scaffolding (initial workflow tests) did its job; now it’s time to throw it away
I demonstrate this technique when we do the Refactoring Fest. We take VQWiki (an open-source Java wiki with zero tests) and build our scaffolding using Selenium.
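
To make the idea of a re-entrant workflow test a bit more concrete, here is a minimal sketch of the kind of throw-away wrapper described above. It is not code from the Refactoring Fest; the database name, baseline dump and Selenium test command are hypothetical, assuming a web app backed by MySQL and a recorded Selenium suite that can be run from the command line.

#!/bin/sh
# Minimal sketch of a re-entrant scaffolding run. The database name,
# baseline dump and Selenium test command below are hypothetical.
set -e

# Set up: restore the application's data-store to a known baseline,
# so the recorded scenarios always start from the same state.
mysql -u app_user -p app_db < baseline_dump.sql

# Exercise: run the recorded workflow tests against the running app.
mvn -Dtest=CheckoutWorkflowTest test

# Tear down: restore the baseline again so the next run starts clean.
mysql -u app_user -p app_db < baseline_dump.sql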