A week ago we reported that Uday is looking at organizing a regular Jenkins meet-up in Silicon Valley. This has made progress since then, and this evening we'll get together to figure out logistics for the first meet-up:
- August 5th, Wednesday 6:30 PM - 7:30 PM
- Starbucks, 750 Castro St, Mountain View, CA 94041
The agenda is:
- Determine the date for the first meet-up
- Speakers for the second slot. Kohsuke will be presenting first.
- Future topics of interest for JAM
- Sponsors / Volunteers
- Ideas to make the JAM relevant and interesting for the extended community to participate and share their implementations
- Q & A
Uday and I will be there, and Uday told me that he heard from another person who will join us. If you are around and are willing to come over, we'd love to see you. If you are interested, I'd also encourage you to join the Jenkins events list, where a discussion is happening.
As you may have noticed, our wiki and issue tracker were unavailable from Thursday to Sunday last week. What happened?
We host parts of our infrastructure at the Open Source Lab at Oregon State (OSUOSL), including the databases for these two services. So far, there's no post-mortem from OSUOSL (they expect to post one later this week), so we need to piece together what we know.
The databases for the wiki and issue tracker became inaccessible around midnight on Thursday/Friday night (all times UTC). Due to the large number and size of databases on that server, restoring from backups and replaying the binlogs took quite a while. During that time, we put up a maintenance screen on the wiki (and messed up the one for Jira, so there were connection timeouts instead).
The databases were back around 3 AM on Sunday. We disabled the maintenance screens around 6 PM later that day.
While this was a rather lengthy outage, it could have been much worse. We lost none of the data, after all. We thank the OSUOSL team for their efforts getting everything back up over the weekend!
Have a look at our brand new blog post "Enhanced Features of Image Validation" for all the details.
Are you willing to do something crazy?
Think about this as a challenge.
You may even see it as a way to change how you are approaching your work, and how your company perceives the value you provide to your projects.
I want you to do the following:
Stop Testing Right Now and Start Doing Your Job!
Yes, you read correctly; and no, there is no mistake in what I wrote.
I want you to take a day, two days, maybe even a full week if you can. I want you not to run a single test during this time, and instead concentrate on finding ways in which you can provide value to your projects by doing things that are not related to running the same tests that you usually run.

We are always too busy to lift our heads up from our tests…
I’ve been working with thousands of testers over the last couple of years as part of my job at PractiTest, and there is something that I keep hearing all the time: “we are too busy testing”.
I’ve heard this in English (with US, UK, Australian and South African accents). I’ve heard this in Spanish (in many tones and regional accents too). I’ve heard this in Hebrew, Italian and Portuguese. It’s been translated to me from Swedish, Finnish, Russian, Dutch, German, Danish, Korean, Hindi, Mandarin, Arabic, French, and I am sure I am missing a whole bunch of other languages too…
The reality of most testers and test managers out there is that we are too busy to lift our heads from our (always) urgent testing tasks, and we have no time to think or do any of the other important tasks that are not related directly to running tests.
The problem is that by always focusing on the urgent, we leave aside the important tasks. And for those of you who are wondering: Urgent is not always the same as Important.
Not only that, but also, if you think about it, many times the reason some tasks or tests become urgent is only because we did not get to do something else that was important in the first place…

Force yourself to think:
What else should I be doing for my project right now?
Unless you literally force yourself to think, by not running any tests for a couple of days, you will not be able to find the additional things you could or should be doing for your projects and for your team.
Many times we cannot do this by ourselves. If this is the case, go ahead and brainstorm with your team; if possible, also talk to people outside your team, as they may have some good ideas about what would be useful.
A way to start the brainstorming process might be to define the value you should be bringing to your team. Ask yourself what the other team members gain from your participation in the project. How can you help your R&D Manager, your Product Manager, maybe even your CEO?
If you want a hint, you are not here to run every single test in the book. You are here to provide smart visibility into the important areas of your product and your process, and to help your stakeholders steer the project on the best course to meet its objectives.
How you translate this into concrete actions and tasks is something only you can decide!

Some ideas of “other” things you might be doing with your time…
It is obvious that the ideas on what can be improved need to come from you and your team, but let me give you some pointers based on my experience with other organizations where we’ve done this type of exercise:
– Find more efficient ways to run your tests. For example, evaluate how an automation framework could enhance your process.
– Check if you can improve the testing and development environments of your team. Think if there are ways to deploy your system faster and with less human intervention.
– Understand how to communicate your testing results and your findings in ways that will reach more stakeholders in a clearer and more direct way. How can you talk their language?
– Find ways to understand what information is required by your stakeholders to make their decisions, and how you can provide it to them.
– Get more involved on the technical aspects of your project. Be part of the design process, do more code reviews, etc.
– Start mapping and analyzing the Risks of your project.
– Think how to measure the quality of your product post-release. Learn how to improve your development and testing based on these results.
– Look for ways to communicate with your end users and get feedback from their positive and negative experience with your system.
And since this is only a very partial list, you can also look around the Internet for inspiration on what other tasks you may do.

What other things are you doing that are not related to testing?
Do you have other interesting things that you and your team do for your project that may help others?
Go ahead and share them with us by adding them as comments to this post!
For over 3 years now I have been working closely with hybris. During this time we have demonstrated a lot of value for customers in optimizing custom hybris configurations at a critical point in the initial deployment of their sites. Understanding factors that contribute to user experience prior to launching a site is critical for […]
The post Hybris Performance Review: 10 System Health Checks appeared first on Dynatrace APM Blog.
I enjoy listening to the “Testing in the Pub” podcasts with Stephen Janaway and Dan Ashby (along with various guests). Though the episodes make me thirsty for a pint of cider, the casual but insightful conversations inspire me to learn and try different ideas. One recent episode was about being a valued team member. I was struck by their observation that one needs confidence to be an effective communicator. If you are confident in your skills and experience, you can go talk to anyone on the team to ask questions, to raise and investigate issues.
That was a real aha moment for me. As testers we talk a lot about learning about our software to gain confidence in what we’re going to deliver. But I hadn’t thought about the value of being confident in yourself. Janet Gregory and I have been presenting conference sessions about whether a tester needs programming skills to be useful. Our view is that technical awareness helps testers communicate with programmers because it gives them a shared language. But after hearing the Testing in the Pub podcast, I think that learning some technical skills also builds confidence.
Looking back on my own career, I remember that when I faced a tough challenge, such as learning a new tool, I felt confident that somehow I would succeed. I started out as a programmer/analyst, and though it’s been a long time since I spent a significant amount of my time coding, I feel confident in conversations with programmers. I’ve learned a bunch of difficult domains, so I believe I can learn any new domain quickly. That makes me confident in approaching business experts and learning from them.
Janet and I have six confidence-building practices to succeed with agile testing (see below). What are the confidence-building practices to succeed in adding value as a tester? Just off the top of my head:
- Make time to learn. Set goals, use a personal kanban board or another organizing tool so that you’ll work on them, and use pomodoros or a similar technique to pace yourself.
- Public speaking is scary, but builds confidence. Helping others learn means you learn too. Volunteer to share your unique experiences at a local meetup.
- Put a bowl of chocolates on your worktop and invite teammates to come help themselves. It gives you a chance to chat informally and get to know them better.
What ideas do you have for building your own confidence as a tester and team member?
ITSO Limited is a certifying organization with the goal of making all electronic, or “smart”, ticketing systems interoperable throughout Great Britain, regardless of operator or mode of transportation.
With billions of transactions each year taking place on a variety of point of service terminals, ITSO Limited needed a streamlined, secure, and seamless system for certifying ticketing system hardware, software, and security.
Smart ticketing was developed to make travel on public transportation more convenient by streamlining ticket purchases and use. Where a cash process is plagued with queues and disgruntled travelers, and is costly to manage, smart ticketing frees travelers from those queues, creates a simpler, automated payment process, and allows bus, train, and tube operators to invest in growing their businesses.
However, because smart ticketing evolved regionally, with each area developing often-proprietary technologies, a particular challenge arose: was it possible for travelers to cross regional borders with a single smart ticket? Enter ITSO Limited, a nonprofit distributing organization.
ITSO Limited tests both existing and new ticketing machines and barriers for compliance with a single, national Crown Copyright Specification. They also profile and manage the electronic data access keys in the equipment to ensure they are all coded to interact properly. ITSO Limited essentially acts as the “keeper of the keys” for members’ systems.

TestTrack Relieves Growing Pains
By 2014, ITSO’s membership had grown to more than 100 transport operators, governmental bodies, and manufacturers operating more than 60,000 point of service terminals. ITSO Limited realized they had outgrown their homegrown test management system—data was siloed, difficult to search, and limited to a single user at a time. With a goal of making all public transportation systems across the UK ITSO-compliant, it was essential they figure out how to expand their test management capabilities.
ITSO Limited needed a single software solution that would centralize their disparate systems, facilitate access to the entire test case lifecycle from execution through reporting, and would also ease their growing pains.

The ITSO certification and testing services team compared dozens of potential software solutions, eventually narrowing the list to Seapine Software’s TestTrack.
What sealed the deal? Find out in the customer story.
The post ITSO Limited Expands Test Management Capabilities with Seapine Software appeared first on Blog.
More than 300 medical device industry professionals have responded to the 2015 State of Medical Device Development Survey so far, with one more month to go.
July’s two random gift card winners are Renee S. and Erica I. Congratulations! We’ll be emailing you each a $25 Amazon gift card soon!
A new question on this year’s survey is, “In your opinion, what needs to occur to foster more innovation in the medical device industry?”
We received a wide array of answers to this question, but two common concerns were government involvement (29%) and funding (11%). Here are a few sample comments:
“If regulatory bodies were more clear and specific about what they needed for clearance, companies could be more innovative with how to meet the requirements without as much fear of the unknown response.”
“Improvement in regulatory approvals in the U.S., so that capital is faster flowing.”
“Greater openness to use of automated tools. Less reliance on paper.”
“Harmonization of regulatory and QMS requirements.”
“Less regulation. More investment.”
“Funding sufficient to do the job right (too many constraints on resources exist).”
What’s your opinion? Share it in the 2015 Medical Device Development Survey today.
The post 2015 State of Medical Device Development Survey: July Winners appeared first on Blog.
As you may have seen, we are doing a UTEST4STARWEST contest this month. With the contest closing soon, now is the time to enter your submissions for a chance to win a sponsored trip to the StarWest conference this fall! One of the many reasons STARWEST is a great opportunity for those in the testing community […]
A little over a year ago, we announced "IBM UrbanCode Deploy with Patterns" which extended our release automation capabilities down the stack. With it, clients could easily spin up and update full-stack application environments (from compute-network-storage layers to application configuration). We call this model a "cloud blueprint" or "pattern". Since then, what we have heard from the market is clear: this full stack approach is the future of application release automation.
Now, that left us with a problem. Our core UrbanCode Deploy offering (like its competitors) would be an incomplete solution for the new normal. Well, problem solved.
Today, we're excited to announce that we are including the cloud blueprint capability in UrbanCode Deploy. Existing customers can just download version 6.1.2, upgrade, and have the capability today.
So what new capabilities do UCD customers get today?
- A rich, graphical editor for OpenStack Heat that makes defining new environments easy and fast.
- Full-stack management of applications, being able to promote infrastructure changes with code changes
- The ability to provision or update cloud environments in SoftLayer, Amazon, VMware, and OpenStack-compatible clouds.
- Access to sandbox testing environments that are quick to create and destroy.
How does it feel to use these cloud blueprints? It feels like the cloud is actually helping our developers deliver faster. So grab the new bits and start taking advantage!
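For readers unfamiliar with OpenStack Heat, a blueprint of the kind the graphical editor produces is, underneath, a Heat template. Here is a minimal, hypothetical sketch of one (the resource names and image are made up for illustration, not UrbanCode's actual output):

```yaml
heat_template_version: 2014-10-16

description: Minimal single-server environment (hypothetical example)

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  app_server:
    # One compute instance; a real blueprint would also define
    # network, storage, and application-configuration resources.
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: { get_param: flavor }
```

Because the whole environment is described as data like this, promoting an infrastructure change alongside a code change becomes a matter of versioning and deploying the template.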
This blog post will show how to use the image validation enhancements introduced in Ranorex 5.4 in your test automation projects. As you can read in the news post here, it is now possible to log the similarity as well as the difference image of two compared images.
Image validation serves many purposes in test automation. For example, you can add a test case to your recurring automated website tests that checks whether the correct logo is displayed in the header area.
This example will be used to show how to work with the new feature.
- The Initial Situation
- Setup of the Ranorex Project
- Enabling Enhanced Reporting Features
- Results of the Test Run
This is the correct header logo:
And this is another version of the logo:
As you can see, especially the shadings of the Ranorex symbol are different. The goal is to log the exact variance of the images in the report of the test case. Additionally, screenshots with details about the areas which do not match should be logged, too.

Setup of the Ranorex Project
In the SETUP part of the test case the browser is opened. To make sure the header image is visible, the EnsureVisible() Invoke Action is called.
In TEARDOWN the browser window used for the comparison is closed.
This is the CheckHeaderImages recording:
It consists of one action: A CompareImage Validation.
In the “Screenshot Name” section the screenshot showing the differing image is chosen.
The “Repository Item” links to the repository item to validate. In this case it is the new (correct) header image.
Note: On the Ranorex website the image is actually a link. However, this is not required. Any kind of repository item can be used, because a screenshot of the item is what gets compared.
Note: The size of the screenshot and the repository item need to be equal. Otherwise the validation will fail.

Enabling Enhanced Reporting Features
By default, no similarity degree is reported. This can be enabled in the properties pane. A left mouse button click on the validation action or the hot key <F4> opens the properties (by default the properties pane opens on the right-hand side of the window).
To log the exact similarity of the two pictures in the report, the “Report Similarity” option needs to be enabled. Setting it to “Always” means it will be reported for every test run, regardless of the result of the validation.
If the option “Report Difference Images” is enabled, images showing the actual differences between the actual and expected images will be logged. In this particular case it is changed to “OnFail”, which means the images will only be reported in case of a failed validation.

Results of the Test Run
Let’s analyze the logs step-by-step.
Firstly, there is an error message saying the validation failed. The reason is that the screenshot of the repository item did not match the specified screenshot of the differing header image.
The error message now also contains the exact similarity of the two images (within brackets). A similarity of “1”, meaning 100 percent, is expected, whereas the actual similarity only reached approximately 99.39 percent.
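Ranorex does not document its exact similarity formula, but a pixel-based similarity like the value reported above can be sketched as follows. This is a rough approximation of the idea, not Ranorex's implementation; images here are plain 2-D lists of (r, g, b) tuples:

```python
def pixel_similarity(expected, actual):
    """Fraction of pixels that match exactly between two equally sized
    images, given as 2-D lists of (r, g, b) tuples."""
    if len(expected) != len(actual) or any(
        len(r1) != len(r2) for r1, r2 in zip(expected, actual)
    ):
        # Like Ranorex, require equal sizes instead of silently resizing.
        raise ValueError("images must have the same dimensions")
    total = sum(len(row) for row in expected)
    matching = sum(
        p1 == p2
        for row1, row2 in zip(expected, actual)
        for p1, p2 in zip(row1, row2)
    )
    return matching / total

# Two 10x10 black images that differ in a single pixel -> similarity 0.99
a = [[(0, 0, 0)] * 10 for _ in range(10)]
b = [row[:] for row in a]
b[0][0] = (255, 0, 0)
print(pixel_similarity(a, b))  # 0.99
```

A result of 1.0 would correspond to the expected "1" (100 percent) in the report, while any smaller value, like the approximately 0.9939 above, indicates a mismatch.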
The next two log entries contain the two images that were compared, as they did before Ranorex 5.4.
The new difference images feature can be seen in the final log message.
The left image shows the difference of both images in binary notation (a black-and-white image): black means the compared images are exactly the same in that area; a white pixel signals that the compared images differ at that spot.
The right image visualizes the quantitative difference between the compared images. The colors are computed by subtracting the color values of the two images and applying that difference to a grey image. The greater the color difference between the compared images, the more colored the difference image will be, whereas grey areas signal no difference at all.
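The two kinds of difference image described above can be sketched as follows. This is a simplified reconstruction of the idea, not Ranorex's actual code; images are again 2-D lists of (r, g, b) tuples:

```python
def binary_difference(expected, actual):
    """Black-and-white difference image: black where pixels match,
    white where they differ."""
    black, white = (0, 0, 0), (255, 255, 255)
    return [
        [black if p1 == p2 else white for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(expected, actual)
    ]

def quantitative_difference(expected, actual, base=128):
    """Grey-based difference image: the per-channel color difference is
    applied to a neutral grey, so grey means no difference and stronger
    colors mean larger differences."""
    def diff_pixel(p1, p2):
        return tuple(
            max(0, min(255, base + (c1 - c2))) for c1, c2 in zip(p1, p2)
        )
    return [
        [diff_pixel(p1, p2) for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(expected, actual)
    ]

# A 1x2 image pair whose second pixel differs slightly in the red channel:
expected = [[(10, 10, 10), (200, 50, 50)]]
actual = [[(10, 10, 10), (180, 50, 50)]]
print(binary_difference(expected, actual))
# [[(0, 0, 0), (255, 255, 255)]]
print(quantitative_difference(expected, actual))
# [[(128, 128, 128), (148, 128, 128)]]
```

Note how the matching pixel stays neutral grey (128, 128, 128) while the differing pixel is tinted red in proportion to the difference, which is exactly the visual effect the report shows.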
The difference images can show you where the actual differences are located.

Conclusion
In this blog post you learned how to use some of the more advanced features of Ranorex Image Validation. The basic concept of image validation stays the same, but by changing the properties of the action, a lot of different logging options can be enabled.
Reporting the similarity of two images can often be useful, e.g. for fine-tuning the similarity threshold within an image validation.
If you have any questions, feel free to ask in the comment section.
Our network of .NET developers reaches far and wide with a broad range of interests. We love taking time to get to know more about those helping enrich our community and making it better. Please join us in getting to know two .NET developers making great strides for us all.

Frank Mao
Frank Mao wrote his first commercial program in the late 90s on Turbo Basic. He doesn’t really care which language he’s using – PHP, Ruby, Python, C#, PowerBuilder, etc. – he just can’t code without testing. And what’s testing without code coverage?
With over 20 years of experience in tech, Frank has spent the majority of his time as a software developer and systems analyst. Currently, he is consulting and freelancing with Mazoic Technologies building iOS and Android mobile apps, with a primary focus on geo-location-based map apps, barcode scanner business apps, Bluetooth communication apps and other CMS front-end apps.
Peter Ravnholt

Two decades in the industry have taught Peter Ravnholt that using common methods and practices mixed with a little flexibility delivers a robust yet simple result. Working primarily on the .NET platform, his contributions have included creating software prototypes and architecting and visualizing new concepts, ideas and products.
As an experienced software architect, developer, and team lead, Peter likes to focus on keeping the customer happy and the team growing. When he’s not coding, you can find Peter listening to and playing music, and searching for the perfect cup of coffee.
There's only a month left until JUC U.S. West on September 2-3! If you're still on the fence, check out the recaps of JUC Europe talks recently posted to the CloudBees blog. These should give you an idea about the kinds of talks you can expect at a Jenkins User Conference:
- How to Optimize Automated Testing with Everyone's Favorite Butler
- Configuration as Code - The Job DSL Plugin
- From DevOps to NoOps
If you're interested in the upcoming Jenkins UI overhaul, make sure to attend Gus and Tom's talk about it. Don't want to wait until JUC to learn more about this? Follow the discussion on the developers mailing list and contribute through early testing.
This JUC will again have an Ask The Experts booth with several Jenkins experts and developers available there throughout the event. If you want to discuss Workflow with Jesse, or pitch your UI ideas to Gus, this is where you'll be able to do that.
Conflict, negotiation and difficult conversations are hard, but there are plenty of good books to help. I often recommend Crucial Confrontations, Getting to Yes and Getting Past No. Someone recommended Difficult Conversations, a book that I recently finished reading.

Difficult Conversations
Where the other books I read tended to take a more mechanistic view of steering the conversation, I really appreciated the slightly different take of this book, which I felt was more humanistic because it acknowledged the emotional side of difficult conversations. The authors suggest that when we have a difficult conversation, we experience three simultaneous conversations:
- The “What Happened” Conversation
- The Feelings Conversation
- The Identity Conversation
The “What Happened” Conversation

We often assume we know what happened, because we know what we know (Our Story). The authors (rightly) point out that our story may be completely different from the other person’s (Their Story). A good practical tip is to focus on building the Third Story as a way of building a shared awareness and appreciation of other data that may make a difference to the conversation.

The Feelings Conversation
As much as we like to think we are logical, we are highly emotional and biased people. It’s what makes us human. We manifest this by saying things based on how we are feeling. Sometimes we don’t even know this is happening. The book helps us understand and gives us strategies for uncovering the feelings that we may be experiencing during the conversation. The authors also suggest building empathy with the other person by walking through the Feelings Conversation the other person will be having as well.

The Identity Conversation
I think this was the first time I had thought about the fact that when we struggle to communicate or agree on something, we may be doing so because we have difficulty accepting something we may not like, or something that threatens our identity. This is what the authors call the Identity Conversation, and navigating it is a natural part of successfully navigating a difficult conversation.

Conclusion
I found Difficult Conversations a really enjoyable read that added a few new perspectives to my toolkit. I appreciate their practical advice such as stepping through each of the three conversations from both your and the other person’s perspective and avoiding speaking in different modes. I like the fact that they address the emotional side to difficult conversations and give concrete ways of understanding and coping with them, instead of ignoring them or pushing them aside.
The other day, I said I was reading Surely You're Joking, Mr. Feynman! by Richard Feynman and was captivated by it. I've finished it now, and I've pulled out a handful of quotes.
I love this on bad (or as he puts it, cargo cult) science and how strongly it relates to the way I want to perform and report testing:
But there is one feature I notice that is generally missing in cargo cult science ... It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty - a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid - not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked - to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can - if you know anything at all wrong, or possibly wrong - to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

And I love this on drawing (one of many skills he acquires in the book, including speaking Portuguese, safe-cracking and drumming), where I'm thinking of the parallel it has with the way in which in testing we seek to bring together art, craft, intuition and science, theory and practice:
I noticed that the teacher didn't tell people much ... Instead, he tried to inspire us to experiment with new approaches. I thought of how we teach physics: We have so many techniques ... that we never stop telling the students how to do things. On the other hand, the drawing teacher is afraid to tell you anything. If your lines are very heavy, the teacher can't say, "Your lines are too heavy," because some artist has figured out a way of making great pictures using heavy lines. The teacher doesn't want to push you in some particular direction. So the drawing teacher has this problem of communicating how to draw by osmosis and not by instruction, while the physics teacher has the problem of always teaching techniques, rather than the spirit, of how to go about solving physical problems. They were always telling me to "loosen up," to become more relaxed about drawing. I figured that made no more sense than telling someone who's just learning to drive to "loosen up" at the wheel. It isn't going to work. Only after you know how to do it carefully can you begin to loosen up ... One exercise they had invented for loosening us up was to draw without looking at the paper ... I found that my drawing had a kind of strength ... which appealed to me. The reason I felt good about that drawing was, I knew it was impossible to draw well that way, and therefore it didn't have to be good - and that's really what the loosening up was all about. I had thought that "loosen up" meant "make sloppy drawings," but it really meant to relax and not worry about how the drawing is going to come out.

And then this on the, ahem, danger of analogy:
Another time somebody gave a talk about poetry. He talked about the structure of the poem and the emotions that come with it; he divided everything up into certain kinds of classes. [...] Dr. Eisenhart ... said, "I'd like to know what Dick Feynman thinks about it in reference to theoretical physics." I got up and said, "Yes, it's very closely related. In theoretical physics, the analog of the word is the mathematical formula, the analog of the structure of the poem is the interrelationship of the theoretical bling-bling with the so-and-so" -- and I went through the whole thing, making a perfect analogy. The speaker's eyes were beaming with happiness. Then I said, "It seems to me that no matter what you say about poetry, I could find a way of making up an analog with any subject, just as I did for theoretical physics. I don't consider such analogs meaningful."
I've been reading Surely You're Joking, Mr. Feynman! by Richard Feynman and I'm captivated by the eyes-open way his anecdotes relate how he notices things, how he feels about things, how he feels about how he feels about things, what his interest in things is, and why, and how he is constantly motivated to experiment and learn and understand, and then share his understanding.
Image: Google Books
For Mario, NoOps is not about the elimination of Ops; it is the automation of manual processes, the end state of adopting a DevOps culture, or, quoting Forrester, "a DevOps focus on collaboration evolves into a NoOps focus on automation." At Choose Digital, the developers own the complete process, from writing code through production deployment. By using AWS Elastic Beanstalk and Docker they can scale up and down automatically. Docker and containers are the best way to adopt DevOps, enabling them to run the same artifact on their machines and in production.
Mario mentioned that Jenkins is a game changer for continuous build, deploy, and test, and for closing the feedback loop. They use DEV@Cloud for the same reason they use AWS: it is not their core business, and they prefer to use services from companies with the expertise to run anything not core to the business. On their journey to adopt Docker they developed several Docker-related plugins, which they are discarding in favor of the ones recently announced by CloudBees, like the Traceability plugin, a very important feature for auditing and compliance.
For deployment, Choose Digital uses Blue-Green deployment: they create a new environment and update Route53 CNAMEs once the new deployment passes tests run by Jenkins, even running Netflix Chaos Monkey. With Beanstalk's environment URL swap, both the old and new deployments can be running at the same time, and reverting a broken deployment is just a matter of switching the CNAME back to the previous URL without needing a new deployment. The old environments are kept for around two days to account for caching and to ensure all users are running on the new environment.
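A CNAME swap of this kind can be sketched as a Route 53 ChangeResourceRecordSets request. The function below builds the change batch that would be handed to that API (e.g. via boto3's `change_resource_record_sets`); the domain and environment names are hypothetical, made up for the example:

```python
def cname_swap_batch(record_name, new_target, ttl=60):
    """Build a Route 53 change batch that points an existing CNAME
    (e.g. the public site name) at a new Elastic Beanstalk environment
    URL. Rolling back is the same call with the old target."""
    return {
        "Comment": "Blue-Green swap to %s" % new_target,
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": new_target}],
                },
            }
        ],
    }

# Point the site at the freshly tested "green" environment
# (names are made up for the example):
batch = cname_swap_batch(
    "www.example.com.", "green-env.elasticbeanstalk.com"
)
print(batch["Changes"][0]["Action"])  # UPSERT
```

Because the swap is a single record update, the rollback described above really is just re-running the same operation with the previous environment URL as the target.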
Rather than replacing the whole stack, which takes around 34 minutes at peak time, only small parts of the AWS Elastic Beanstalk stack are deployed, in order to deploy faster and more often. For some complex cases, such as database migrations, features are turned off by default and turned on at low-traffic hours.
After deployment, logs and metrics are important; New Relic, for example, has proven very helpful for understanding performance issues. Using these metrics, deployments are scaled automatically from around 25 to 250 servers at peak time.
We hope you enjoyed JUC Europe!

Here is the abstract and link to the video recording of his talk.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.
Two-for-one special: Gus and Tom presented the Jenkins UI, from the Paleolithic past to the shining future. Clearly comfortable with their material, they mixed jokes with demos and some serious technical meat. They spoke with candor about the current limits of the UI and how CloudBees, Inc. is working with the community to overcome them.
Tom took a divisive approach, specifically dividing monolithic CSS, JS, and page structure into clean, modular elements. “LESS is more” was a key point, using LESS to divide CSS into separate imports and parameterize it. He also explained work to put a healthy separation in the previously sticky relationship between plugin functionality and front-end code.
Tom showed off a completely new themes engine built upon these changes. This offers each installation and user the ability to customize the Jenkins experience to their personal aesthetics or to improve accessibility, such as for the visually impaired. Gus brought a vision for a clean, dynamic, streamlined UI. His goal was “third level” changes which enable completely new uses; for example, views that can become reports. He also announced a move towards scalable layouts for mobile use, so “I know if I need to come back early [from lunch] because my build is broken or if I can have a beer over lunch.”
Radical change comes with risk, and to balance this, Gus repeatedly solicited community feedback to see if changes work well. Half-seriously, he mentioned previously going as far as giving out his mother’s phone number to make it easy for people to reach out.
Wrapping up, questions showed that while the new UI changes aren’t ready yet, CloudBees, Inc. is actively engaging with the community to shape the new look and feel of Jenkins, and the future is promising!
We hope you enjoyed JUC Europe!

Here is the abstract for Tom and Gus's talk, "Evolving the Jenkins UI." Here are the slides for their talk and here is the video.
If you would like to attend JUC, there is one date left! Register for JUC U.S. West, September 2-3.