Agile testing? What is it and what should the QC/testers do in each sprint?


Hi all,
In software development, you are probably familiar with Agile/Scrum and testing in Agile. As I understand it, the term Agile typically refers to any approach to project management that strives to unite teams around the principles of collaboration, flexibility, simplicity, transparency, and responsiveness to feedback throughout the entire process of developing a new program or product.
However, what testing activities should you do in each sprint? And what testing methods should we use to control quality as well as possible?
Thanks in advance for your contribution.


Answers ( 11 )

  1. Thanh Huynh
    0
    May 22, 2015 at 12:35 pm

    @Phuoc, it's a great question.

    IMHO, testing is testing regardless of what shop you are in. Agile does not make testing better in itself; rather, Agile takes advantage of testing to make it valuable.

    However, if you are testing in an Agile shop and have transitioned from a previous model, you need to change your mindset to the way Agile works. You're right that Agile puts a focus on collaboration, and as a tester you need to be collaborative. Be ready to communicate constantly with the team to make things work, instead of taking a "this-is-not-my-job" attitude.

    All the practices/techniques in testing are unchanged in Agile, such as defect detection, defect prevention, defect reporting, etc.

    Re: "What're the testing methods that should we use to control the quality as well as possible?"
    > Quality is the team's work, not just one group's or one person's. Don't make things harder by putting the responsibility of "controlling quality" on our shoulders alone. Let's work closely with the team and together make good-quality products.
  2. phuoc.nguyen8811
    1
    May 22, 2015 at 2:03 pm

    @Thanh:
    Yes, I agree that in Scrum we don't separate roles like PM, developer, and tester in fine detail. We just have the Product Owner, the Scrum Master, and team members. And the quality of the product is the team's work.
    But in my case, I'm running into a barrier with the timing between integration testing and regression testing. Imagine we have one 3-week sprint including 10 user stories (U1 --> U10): the 1st week for planning and design, the 2nd for implementation, and the 3rd to freeze code and fix bugs/issues. In the 2nd week, say user story U1 is completed on Monday, U2 on Tuesday, and so on. That means right after each user story is completed, we have to "jump in" and do functional testing. And after all user stories are finished, we have to do integration testing when merging the code. But that is so near the release day that we don't actually have enough time to do regression testing for everything impacted, plus bug fixing. Releasing a version like that is painful.
    So, given my case, do you have any advice to help me through the pain?
  3. Thanh Huynh
    0
    May 22, 2015 at 2:58 pm

    @Phuoc,

    I see your point.

    *Re: "For example, in 2nd week, we have user story , U1 completed on Monday, U2 completed on Tuesday, etc . It means right after each user stories completed, we have to "Jump in" and do Functional test": what do you mean by "completed"? I suppose you mean Done. What is your team's DoD (Definition of Done)? More often than not, when a user story is Done, the code is done and functional testing is done. In your case, it sounds like your DoD does not include functional testing, and that is what causes your pain.

    *Re: "And after all user stories are finished, we have to do "integration testing" when merge code. But it's so near to release day, and actually we don't have enough time to do "regression testing" for all impacted, bug fixing.": this happens from time to time and is not uncommon in most Agile shops. Luckily, we have things called TDD (Test-Driven Development) and Continuous Integration to solve that problem. To be honest, I've never had hands-on experience with TDD, but you can research more on it. The idea is to automate things and get them integrated and tested as soon as possible.
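    To make the TDD idea concrete, here is a minimal test-first sketch in plain Java. The class, method, and numbers are hypothetical examples, not code from anyone's project, and a real team would use JUnit rather than hand-rolled assertions. The checks in testApplyDiscount are written first, fail while applyDiscount is unimplemented, and then drive the implementation:

```java
public class DiscountTddExample {
    // Production code: in TDD, written only after the test below existed and was failing.
    static double applyDiscount(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range: " + percent);
        }
        return price * (100 - percent) / 100.0;
    }

    // The "test-first" part: these assertions come before the implementation.
    static void testApplyDiscount() {
        if (applyDiscount(200.0, 10) != 180.0) throw new AssertionError("10% off 200 should be 180");
        if (applyDiscount(50.0, 0) != 50.0) throw new AssertionError("0% off should be unchanged");
    }

    public static void main(String[] args) {
        testApplyDiscount();
        System.out.println("all tests passed");
    }
}
```

    Under CI, a suite like this runs on every merge, so integration and regression feedback arrives during the sprint instead of in the final week.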

    Hope your pain is better now... :-)
    Best answer
  4. Phuoc Nguyen
    0
    May 22, 2015 at 3:35 pm

    Great, I'm feeling better and have some ideas for getting through the pain.
    I understand your idea about automation, but it causes another pain: our features change quickly and are unstable. We've already tried Selenium (WebDriver) built on a POM or Cucumber framework, but each time a feature changes, the CSS selectors and XPaths change accordingly, and we have to re-capture those elements again (we store them in a .properties file). Doing both manual testing and automation is difficult and wastes time. I hope I can find a better way to automate. :(
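    One way to soften the re-capture pain described above is to keep every locator in a single repository object loaded from the .properties file, so a UI change means editing one file rather than many tests. A minimal sketch, with made-up keys and selectors, and the properties text inlined here instead of read from disk:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class LocatorRepository {
    private final Properties locators = new Properties();

    LocatorRepository(String source) {
        try {
            locators.load(new StringReader(source));
        } catch (IOException e) {
            throw new RuntimeException("could not parse locator properties", e);
        }
    }

    // Page objects ask for locators by logical name; only the properties
    // data changes when the UI changes.
    String cssFor(String element) {
        String value = locators.getProperty(element);
        if (value == null) throw new IllegalStateException("no locator for: " + element);
        return value;
    }

    public static void main(String[] args) {
        // In a real suite this string would come from locators.properties on disk.
        String props = "login.button=#btn-login\nlogin.username=input[name='user']";
        LocatorRepository repo = new LocatorRepository(props);
        System.out.println(repo.cssFor("login.button"));
    }
}
```

    A Selenium page object would then call driver.findElement(By.cssSelector(repo.cssFor("login.button"))) instead of hard-coding the selector, so a changed button only touches the properties file.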
    Because of that, we have just tried "stub implementation" when coding: we build modules as services and call their REST APIs when doing integration testing. :)

    Thank you again for your clarification. It helps me a lot.
  5. Thanh Huynh
    1
    May 22, 2015 at 4:34 pm

    @Phuoc,

    There are many layers at which to perform automation, and the GUI layer is much more troublesome and easily broken, as you said. It's not clear to me why you are targeting the GUI layer. Are you trying to check the GUI specifically? If checking the GUI is not mandatory and the REST API service works for you, reconsider your goal when you automate.
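    To illustrate the API-layer approach, here is a self-contained sketch using only the JDK (11+): a throwaway stub service standing in for one of the modules (the /api/status endpoint is invented for the example), plus a check against it. No browser, no CSS or XPath to maintain:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ApiLayerCheck {
    public static void main(String[] args) throws Exception {
        // Stub service standing in for the module under test (port 0 = pick any free port).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // API-level check: calls the service directly, no GUI involved.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/api/status")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) throw new AssertionError("expected 200");
        if (!response.body().contains("\"status\":\"ok\"")) throw new AssertionError("unexpected body");
        System.out.println("API check passed");
        server.stop(0);
    }
}
```

    Checks like this survive CSS and XPath churn entirely, which is why they tend to be much cheaper to maintain than GUI-layer scripts.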

    Automation takes time and effort to fully see its value, so take your time :-)
  6. Thong Khuat
    2
    May 22, 2015 at 5:28 pm

    Holla,

    Phuoc, I have just read your question; let me start with some questions of my own about your case:
    1 - How long has your Scrum team been in this trouble?
    2 - In your view, is the current CI/automation system helpful to the whole process?
    3 - How much effort do you spend on ad-hoc testing, unit tests, integration tests, and "regression tests" for each US?
    4 - Do you have a bug density matrix for each release in your quality report?

    I bring those questions into this topic so we can look at your situation again in more detail. Let me dig into some points:

    1 - It looks like your Scrum team is struggling with workload estimation: one sprint cannot deliver quality for the number of USs picked up. My team ran into this problem once, and we solved it by adjusting the workload after each sprint until we settled on a good estimation per sprint. For example, we had 10 USs in the first sprint and released them with low quality. Then we reduced the number of USs in the next sprint to have more time for regression testing and automation maintenance. We kept redefining our Definition of Done and shifting more time into the test types we needed to assure quality.

    After about 5 or 6 sprints, we had a good workload and enough testing for each US, and we were comfortable keeping pace with the other teams. I would say that applying Scrum is a long process of redefining and optimizing your process through retros to reach quality. We take lessons learned and improve over time, rather than just chasing the deadline.

    2 - We tried to adapt the testing process to the difficulty of the product by using the quality report. We re-examined all the bugs we got and built the bug density for each release to find out which areas most often lacked coverage. Then, in the next sprints, we rebalanced unit tests and regression tests to make those features stable.
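    The per-area bug density described above can start as a simple tally per release. A sketch, where the feature areas and bug records are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BugDensityReport {
    // Tallies bugs per feature area for one release; each list entry is the
    // area a logged bug was filed against.
    static Map<String, Long> densityByArea(List<String> bugAreas) {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (String area : bugAreas) {
            counts.merge(area, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> bugs = List.of("checkout", "login", "checkout", "search", "checkout");
        Map<String, Long> density = densityByArea(bugs);
        System.out.println(density); // checkout stands out as the weak area
    }
}
```

    Dividing each count by the area's size (stories delivered, or KLOC) turns the tally into a density you can compare across releases to decide where to add coverage.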

    3 - We have never given up on our automation helping us with regression testing, and we always budget time for script maintenance. But if you feel the automation does not really help your job and maintaining the scripts is a burden, then you genuinely have a problem with your automation. Think about stopping it or making it better.

    Hope this helps.
  7. Phuoc Nguyen
    1
    May 22, 2015 at 7:29 pm

    @Thong:
    Long time no see :D.

    My answers to your questions:
    1 - How long has your Scrum team been in this trouble?
    <== My team has had this trouble for about 3-4 months, throughout 4-6 sprints. And YES, you're right, we're struggling with workload estimation. In previous sprints we didn't count the effort for testing in the workload. But after we included a testing estimate in each US, we came under pressure because the Product Owner (PO) kept pushing more backlog items into the sprint.
    For example: the PO keeps asking what the developers are doing while the USs are being tested. Do they have to wait until a bug appears and is fixed before another US goes into the sprint? And of course the PO thinks each US is small and can be done quickly. It sounds silly... and doesn't look like Scrum?
    2 - In your view, is the current CI/automation system helpful to the whole process?
    <== Yes, it's helpful to our team: we can flexibly simulate a module by calling its REST API after we finish a feature.
    3 - How much effort do you spend on ad-hoc testing, unit tests, integration tests, and "regression tests" for each US?
    <== We work from a checklist that we create in the 1st week of the sprint. In my team, the owner of each US (often a developer) writes the unit tests (JUnit). Then, when the US passes its acceptance criteria, the US is tested.
    4 - Do you have a bug density matrix for each release in your quality report?
    <== No, we don't have a bug density matrix. Maybe I will create another question about how to calculate a defect density matrix. :)
  8. Thong Khuat
    1
    May 23, 2015 at 10:11 am

    Phuoc,

    If you can see the root cause and you know the Agile principles, you will know how to fix the issue. First, your team should communicate more at sprint planning to agree on how much effort each US needs. "Don't let the PO put pressure on your estimation decisions." Use evidence and numbers to back up your estimate and change his mind.

    "PO always concern about what does developer do why the USs was being tested? Do they have to wait until have bug and fix it then they put another USs into sprint." >>>>> If the coder just finishes his job quickly and hands the code to the tester with many bugs, it means his work has bad quality and slows down the whole team. You should bring the issue into the retro so he can put more effort into his work and care more about quality, not just coding. If we have not finished the existing USs, we should not bring more USs into the sprint. As we often say, there is no fixed role in a Scrum team, so if someone is free while others are busy, he should support their tasks. If not, then yes, it does not look like a Scrum team :)
  9. quangtringuyen
    2
    June 9, 2015 at 4:13 pm

    It helps to know the difference between traditional software development methodologies and the Agile methodology.

    In Agile, everybody who participates in the Scrum must take the highest level of responsibility, not only for his own job but also for the Scrum team.

    An Agile tester must participate in all the meetings, from the kick-off meeting to the retrospective. And what does he do in those meetings?
    1. Kick-off meeting: helps the tester understand what the stakeholder wants the development team (programmers, testers, PM, BA...) to build. Define the sprint backlogs, the number of sprints, their duration, the deadline...

    2. Sprint planning meeting: the tester, together with the Scrum team, defines what they will do in this sprint and how many USs it will include, works with the BA to understand each US, defines the complexity, and estimates how much time/effort will be needed (usually based on a number of points; in our team, each point = 8 hours).

    3. Daily standup meeting: tell the Scrum team what you are doing, what you will do next, and whether you are on the right track.

    4. Sprint review meeting: the tester and programmer demo what they did for the stakeholder and get his decision.

    5. Retrospective meeting: lessons learned.

    As you can see, the tester participates in most phases and meetings of the software development process, with the highest level of responsibility. In my case, we usually define a plan for each sprint, from sprint planning until the sprint review.
    Each sprint is 4 weeks. The first week is for sprint planning (which usually takes only 2 days); from there to the end of the next week is for coding, ending on the Friday we call the dev cut-off; the following week is for testing, ending on the test cut-off date; and the last week is for the sanity test and sprint demo. We usually run sprint planning for the next sprint during the last week of the current one, so we have more time to work within each sprint.
    The tester joins sprint planning to understand the USs and estimate how much time each one takes, then investigates and designs tests after sprint planning ends (the design must be finished before the dev cut-off date). He starts testing after the dev cut-off date and must finish before the test cut-off date, then does the sanity test and the stakeholder demo in the last week, while also joining sprint planning for the next sprint.

    You can see that a tester doesn't have free time in Agile, from start to finish. I don't know what you are confused about, but if you are wondering why you always miss the deadline, I think the Scrum Master isn't doing his job well :)
  10. Thanh Huynh
    0
    June 9, 2015 at 10:08 pm

    @Quang,

    Awesome, +100 to your comments. :D
  11. Paul Seaman
    3
    June 13, 2015 at 2:37 pm

    There are a couple of things that caught my eye skimming through this thread. The first is that Thanh quite rightly points out that there is no "Agile testing", but there is testing in Agile. Testing in Agile uses all those test skills you would have, or could have, used under waterfall or any other development method.
    I'm also quite surprised that in a 3-week sprint a whole week would be spent planning. That is a lot of time if a full week goes just to planning what is to be done. Why not plan just enough to get started, and get some of the story cards to done earlier in the sprint?
    @quangtringuyen mentions story points having to be baselined (1 point = 8 hrs). This is not a requirement for story points. Story points are intended to be abstract, a measure of relative size. Mike Cohn writes a very good explanatory piece on his Mountain Goat website. You can choose to make a story point equal to 8 hrs if you and the team agree to, but it doesn't have to be that way. My personal preference is not to equate points to hours.
    I've only skimmed the comments to date, but has anyone mentioned the importance of keeping user stories small? Vasco Duarte, in his No Estimates book, suggests no more than half a day of development where possible; the largest a story should get is half a sprint (and at that point it becomes a risky item). Size your stories so that you are quickly getting stories to done and user value is steadily crossing the line. Also use the INVEST principle when writing stories (you can Google it for an explanation).
