Value betting involves identifying bets where the odds offered by the bookmaker are higher than the event's true probability justifies. This requires thorough research and a good understanding of the sport you are betting on. To find value bets, compare the odds from multiple bookmakers and look for discrepancies.
Example: If you believe a team has a 60% chance of winning, but the bookmaker’s odds imply a 50% chance, this is a value bet.
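A quick expected-value check makes the example concrete (assuming the bookmaker's implied 50% corresponds to decimal odds of 2.00): staking one unit at decimal odds O with a true win probability p has expected value EV = p × (O − 1) − (1 − p) = 0.60 × 1.00 − 0.40 = +0.20, i.e. a 20% edge per unit staked. A positive expected value is precisely what makes it a value bet.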
Matched betting is a risk-free strategy that takes advantage of free bet promotions offered by bookmakers. By placing two opposite bets (one with the bookmaker and one on a betting exchange), you can guarantee a profit regardless of the outcome.
Example: If a bookmaker offers a $50 free bet, you can back Team A to win with the free bet and place a lay bet against Team A (betting that it will not win) on a betting exchange.
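For readers who want the arithmetic behind "risk-free": ignoring exchange commission, a lay stake of F × (B − 1) / L (where F is the free bet amount, B the bookmaker's back odds, and L the exchange lay odds) equalizes the profit on both outcomes. With a $50 free bet backed and laid at odds of 5.0, the lay stake is 50 × 4 / 5 = $40: if Team A wins, you collect $200 from the bookmaker and lose $160 on the exchange; if Team A loses, you win the $40 lay stake. Either way the profit is $40.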
Arbitrage betting involves placing bets on all possible outcomes of an event across different bookmakers to guarantee a profit. This requires finding odds discrepancies between bookmakers and acting quickly before the odds change.
Example: If Bookmaker A offers odds of 2.10 on Team A to win and Bookmaker B offers odds of 2.10 on Team B to win, you can place bets on both outcomes and lock in a profit.
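You can verify the arbitrage by summing the implied probabilities: 1/2.10 + 1/2.10 ≈ 0.952, which is less than 1, so a guaranteed margin exists. Staking $100 on each side costs $200 in total and returns $100 × 2.10 = $210 whichever team wins, locking in $10, a 5% return. In general, the stake on each outcome should be proportional to 1/odds.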
The Kelly Criterion is a mathematical formula used to determine the optimal size of your bet. It helps maximize your bankroll growth over time while minimizing the risk of ruin. The formula is:
\text{Kelly Percentage} = \frac{B \times P - Q}{B}

Where:
B = the decimal odds minus 1 (the net odds you receive on a win)
P = the probability of winning
Q = the probability of losing (equal to 1 − P)

Example: If you believe a bet with odds of 2.50 has a 50% chance of winning, the Kelly Percentage would be:

\frac{(2.50 - 1) \times 0.50 - 0.50}{2.50 - 1} = \frac{0.25}{1.50} \approx 0.167

This means you should bet about 16.7% of your bankroll.
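For anyone who wants to automate this, here is a minimal sketch in Go (the function and variable names are my own, not from any betting library):

func kellyFraction(decimalOdds, winProb float64) float64 {
	b := decimalOdds - 1 // net odds received on a win
	q := 1 - winProb     // probability of losing
	f := (b*winProb - q) / b
	if f < 0 {
		return 0 // negative edge: skip the bet entirely
	}
	return f
}

kellyFraction(2.50, 0.50) returns roughly 0.167, matching the worked example above.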
The Martingale system involves doubling your bet after each loss until you win. The idea is that a win will recover all previous losses plus a profit equal to the original stake. This strategy is risky as it requires a large bankroll and can lead to significant losses during a losing streak.
Example: If you start with a $10 bet and lose, your next bet should be $20, then $40, and so on until you win.
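The bankroll requirement grows faster than it might appear. At even-money odds, after n consecutive losses starting from a stake s you have lost s(2^n − 1), and your next bet must be s × 2^n. Seven straight losses of the $10 example above mean $1,270 lost and a $1,280 bet on the table, all to chase a net profit of $10.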
The Fibonacci betting system is based on the Fibonacci sequence, where each number is the sum of the two preceding ones (1, 1, 2, 3, 5, 8, 13, …). In this system, you increase your bet according to the sequence after a loss and move back two steps in the sequence after a win.
Example: If you start with a $10 bet and continue losing, your next bets would be $10, $20, $30, $50, and so on.
Betting strategies can significantly enhance your chances of success, but no strategy guarantees a win every time. It is essential to do your research, stay disciplined, and manage your bankroll effectively. Remember, betting should be fun and done responsibly. Use these strategies as tools to help you make more informed decisions and enjoy the experience.
In this comprehensive guide, we will delve into the essentials of betting, from understanding the key concepts and terminologies to learning the fundamental principles for successful betting. Whether you’re a beginner looking to dip your toes into the world of betting or a seasoned bettor aiming to enhance your strategies, this guide will provide valuable insights to help you navigate the exciting world of betting.
Before diving into the world of betting, it’s essential to familiarize yourself with the key concepts and terminologies commonly used in the industry. By understanding these terms, you’ll be able to navigate through the various aspects of betting more effectively. The fundamentals to grasp include the odds (the bookmaker’s price on an outcome, which implies a probability), the stake (the amount you wager), your bankroll (the money set aside for betting), the role of the bookmaker (who sets the odds and accepts the bets), and value (odds that underestimate an outcome’s true probability).
Understanding these key concepts and terminologies will lay a solid foundation for your betting journey. By grasping these fundamentals, you’ll be able to interpret odds, manage your bankroll effectively, and make informed betting decisions.
While betting is primarily an activity of chance, there are fundamental principles that can increase your chances of success and enhance your overall betting experience. Key principles include conducting thorough research before placing a bet, practicing discipline in managing your bankroll, seeking out value rather than simply backing favorites, keeping a rational mindset instead of betting on emotion, and treating betting as entertainment to be enjoyed responsibly.
By adhering to these fundamental principles of successful betting, you can elevate your betting experience and potentially increase your chances of achieving positive outcomes. Remember, betting should be regarded as a form of entertainment and should be enjoyed responsibly.
In this comprehensive guide, we have explored the essentials of betting, providing insights into the key concepts and terminologies as well as the fundamental principles for successful betting. By understanding odds, stake management, and the role of bookmakers, you’ll be equipped with the necessary knowledge to navigate the world of betting more effectively.
Furthermore, by conducting thorough research, practicing discipline in bankroll management, and understanding the concept of value, you can increase your chances of making successful bets. Remember to approach betting with a rational mindset, utilize available betting tools, and enjoy the experience responsibly.
While betting can offer excitement and the opportunity to make watching sports events more interesting, it’s important to remember that it should never be viewed as a guaranteed way to get rich or win big. Instead, embrace betting as a form of entertainment and exercise caution to ensure a positive and enjoyable experience.
These books are written by experts who have spent years studying and analyzing various sports and betting strategies. They provide valuable insights, proven strategies, and expert advice to help bettors enhance their chances of winning. In this ultimate guide, we will explore the best sports betting books that can boost your odds and sharpen your skills.
When it comes to sports betting, knowledge is power. The more you understand about the sports you are betting on and the strategies behind successful bets, the better your chances of winning. Here are some of the best sports betting books that can help you gain that knowledge and expertise:
“Sharp Sports Betting” is a classic in the world of sports betting literature. Written by Stanford Wong, a renowned expert in the field, this book covers the basics of sports betting, including bankroll management, line shopping, and understanding odds. It provides a solid foundation for beginners and serves as a refresher for experienced bettors.
In “The Smart Money,” Michael Konik takes readers inside the world of professional sports bettors. He shares the stories of individuals who have made millions through sports betting and provides insights into their strategies and mindset. This book offers a unique perspective and valuable lessons for both beginners and experienced bettors.
“Weighing the Odds in Sports Betting” focuses on the statistical and mathematical aspects of sports betting. King Yao explores various betting systems, analyzes historical data, and provides practical advice on finding value in sports betting markets. This book is ideal for those who prefer a more analytical approach to betting.
The best sports betting books offer valuable insights, strategies, and expert advice that can enhance your chances of winning. Whether you’re a beginner or an experienced bettor, these books provide a wealth of knowledge that can help you make more informed and successful bets.
Sports betting is more than just placing bets on your favorite teams. It requires a thorough understanding of various factors that can influence the outcome of a match. Here are some must-read books that unlock the secrets of successful sports betting:
In “Trading Bases,” Joe Peta shares his journey from Wall Street to becoming a successful sports bettor. He explores the parallels between stock trading and sports betting, highlighting the importance of data analysis and market trends. This book is a fascinating read for anyone interested in the intersection of finance and sports.
“The Signal and the Noise” is not specifically about sports betting, but it offers valuable insights into the world of predictions and forecasting. Nate Silver, a renowned statistician and analyst, explains the principles of probabilistic thinking and how it can be applied to sports betting. This book is a thought-provoking read for anyone looking to improve their predictive skills.
While mentioned earlier as a great beginner’s guide, “Sharp Sports Betting” also deserves a spot in this section. It provides a comprehensive overview of various betting strategies, including line shopping, understanding odds, and exploiting market inefficiencies. This book is a must-read for any sports betting enthusiast.
Conclusion: Unlocking the secrets of sports betting requires a deep understanding of various factors and the ability to analyze data and make informed predictions. These must-read books offer invaluable insights and strategies that can take your sports betting game to the next level.
Whether you’re a novice looking to learn the basics of sports betting or a seasoned bettor aiming to further improve your skills, these essential books can help you on your journey:
“The Complete Guide to Sports Betting” is an excellent resource for beginners. It covers everything from understanding odds and bankroll management to analyzing teams and making informed bets. This book provides a solid foundation for those new to sports betting.
As mentioned earlier, “Sharp Sports Betting” is a must-read for anyone interested in sports betting. It offers valuable insights into various strategies and provides the tools and knowledge to make more successful bets. This book is perfect for those looking to take their skills to the next level.
“The Logic of Sports Betting” delves into the fundamental principles that underpin successful sports betting. It explores topics such as bankroll management, value betting, and exploiting market inefficiencies. This book is recommended for intermediate and advanced bettors looking to refine their skills.
From novice to pro, these essential sports betting books provide the knowledge and strategies needed to sharpen your skills. Whether you’re just starting out or looking to take your betting game to new heights, these books offer invaluable insights and guidance.
Sports betting can be an exciting and potentially profitable activity, but it requires knowledge and skill to consistently make successful bets. The best way to improve your odds and enhance your chances of winning is to educate yourself through sports betting books written by experts.
By reading and studying these books, you can gain valuable insights, learn proven strategies, and develop the skills needed to make informed and successful bets. From understanding the basics to unlocking the secrets and sharpening your skills, these books cover a wide range of topics to cater to bettors of all levels.
Remember, sports betting should be viewed as a form of entertainment and not a guaranteed way to get rich. It’s about making watching sports events more interesting and adding an extra layer of excitement. So, grab a book, dive into the world of sports betting, and enjoy the process of becoming a more knowledgeable and successful bettor.
In this article, we will dive into the top three leading betting apps available in the UK/US market. These apps provide a seamless and user-friendly betting experience, ensuring that users can enjoy the thrill of gambling while taking advantage of the convenience offered by modern technology.
In this section, we will explore the top three betting apps that have gained immense popularity among UK/US users: Bet365, William Hill, and Betfair. These apps have been selected based on their features, user interface, reliability, and overall betting experience. Let’s take a closer look at each one.
In this section, we will delve deeper into each of the top three leading betting apps, providing a comprehensive review of their features, pros, and cons.
Bet365 is a powerhouse in the betting industry, providing users with an extensive range of sports and events to bet on. The app’s user-friendly interface allows users to navigate through different markets effortlessly. Bet365 also offers live streaming of various sports events, ensuring that users can watch the action unfold while placing their bets. The app provides competitive odds, giving users the best chance of maximizing their winnings. However, some users have reported occasional delays in the app’s live streaming feature.
William Hill is a well-established name in the UK/US betting market, known for its reliability and excellent customer service. The app offers a wide range of sports and events, ensuring that users have ample options to choose from. In addition to sports betting, William Hill also provides access to a variety of casino games, adding extra excitement to the overall gambling experience. The app’s interface is straightforward to use, but some users have reported occasional lags when placing bets during peak hours.
Betfair stands out from the crowd with its unique betting exchange platform. This allows users to bet against each other, providing better odds and the opportunity to trade bets. The app offers a wide range of sports and events, with live streaming options and in-play betting. Betfair’s interface is sleek and visually appealing, making it a favorite among users who appreciate modern design. However, the peer-to-peer betting system may be confusing for some novice bettors, and the app can feel overwhelming initially.
In conclusion, the top three leading betting apps in the UK/US market offer a range of features and options to cater to different user preferences. Bet365, William Hill, and Betfair are all reliable and reputable choices that provide an enjoyable and immersive betting experience. Whether you are a seasoned bettor or a novice looking to dip your toes into the world of gambling, these apps will undoubtedly enhance your betting journey. So, download your preferred app and get ready to make watching sports events even more exciting.
Change can be terrifying, especially when you are comfortable, when you are content. Nothing was terribly wrong, but I got the nagging feeling that perhaps nothing was going terribly right either. I was no longer content with being content. So in 2017 I began to change some things up to make space for new opportunities.
That coding life pic.twitter.com/SW6KoTazDv
— Caitie McCaffrey (@caitie) September 13, 2017
I made a conscious effort in 2017 to be less busy, to travel and speak a bit less. 2016 was a year of constant travel visiting 19 cities, 7 countries, and 3 continents. I visited Twitter offices, spoke 15 times at conferences and meetups, and managed to squeeze in trips to see family and friends. It was an amazing experience, but not a sustainable one for me.
So I made a conscious effort to slow down and was incredibly selective about the talks and travel I took on. I declined several opportunities to speak and travel to great conferences and locations this year. I want to take a moment to thank all the conference organizers who reached out; I greatly appreciate all of the invitations and fantastic opportunities, and unfortunately did not have the bandwidth to do more this past year.
I gave versions of my The Verification of Distributed Systems talk to larger audiences at Devoxx San Jose in March, and Velocity San Jose in June. While I’ve given this talk numerous times, I think it’s perennially important, and people consistently tell me how much they learn from it.
I wrote a brand new talk Distributed Sagas: A Protocol for Coordinating Microservices which I gave at J on the Beach in May and at Dot Net Fringe in June. This was a passion project for me, as I’d been exploring the ideas for multiple years, and wanted to share the progress I had made.
I also wrote another new talk for the inaugural Deconstruct Conf in Seattle, The Path Towards Simplifying Consistency in Distributed Systems. This conference was my favorite of the year: a single track filled with excellent speakers that focused not only on technology, but also on the culture and community in tech. The cherry on top was its location, the Egyptian theater in Seattle’s Capitol Hill neighborhood, my old stomping grounds.
I also spoke at two chapters of Papers We Love, San Francisco and Seattle. I presented Barbara Liskov’s paper Distributed Programming in Argus. This brings my total number of talks at Papers We Love chapters to 7, which I think once again makes me the record holder :). All joking aside, Papers We Love is one of my favorite organizations, and I love attending and speaking at the meetups because of the community it fosters, bringing together academia and industry, and the culture of curiosity it inspires.
I wrote a single blog post in 2017, Resources for Getting Started with Distributed Systems, which is a collection of materials that have greatly influenced me and attempts to answer the perennial question I get asked: “How do I get started with Distributed Systems?”
Earlier this year an old colleague recommended I take a phone call with a group at Microsoft Research. After a couple phone calls, and an onsite interview, I was convinced that this was a rare opportunity with an amazing team and an industry defining project. So in June, after 2.5 years of working at Twitter, I decided to leave the flock.
Working at Twitter was a truly great experience. It was an incredible ride where I got to learn and work on so many amazing projects including being the Tech Lead of the Observability team, speaking at Twitter Flight, digging into Distributed Build, shipping Abuse Report Notifications, and facilitating TWIG (Twitter’s Engineering Leadership Program). I also feel very fortunate to have worked with and met so many incredible people.
Today is my last day at Twitter. What an incredible ride the last 2.5 years, so grateful for this experience & the folks I met along the way pic.twitter.com/pWKQTd27yo
In July I started as a Principal Software Engineer in Microsoft Research, and have loved every minute of it. I’m getting to stretch, learn, and grow every day on a project that I truly believe will change the world. I also adore my teammates; this is by far the smartest and nicest team I have ever worked on. We consistently talk about and live our cultural values of trust, kindness, and fearlessness. I couldn’t ask for a better team. And just in case that wasn’t enough change for one year, in November I stepped into the Lead role, a hybrid Tech Lead and People Manager, for the Services team, which is another new exciting challenge and opportunity that I’m loving.
Leaving San Francisco felt inevitable. I moved to San Francisco to experience the tech scene, to live the cultural phenomenon. But after 2.5 years I was ready to move on. San Francisco was not my forever home; our worlds just did not match.
Moving back to Seattle was an easy decision. I first fell in love with Seattle when I moved here after college, and still love it. Even after all my nomadic wanderings and travel, when I visited Seattle in April for Deconstruct Conf I instantly felt like I was home. I also realized I was quite nostalgic for Seattle earlier in the year when I began marathoning episodes of Grey’s Anatomy again.
And if all the warm and fuzzy feelings about Seattle weren’t enough, the stars magically aligned and within a week of moving back I made an offer on a house, and it was accepted! New job, new/old city, and a new homeowner too!
I jokingly tell friends that I blew up my whole life earlier this year, which isn’t entirely untrue. The top three stressors in life are commonly reported as job change, relationship change, and moving. I did all three within the span of about two months. I’d like to take a quick moment to thank my community of family, friends, and colleagues who helped and supported me through this whirlwind transition. I could not have done it without your support.
Even with all the stressors I honestly could not be happier (with my personal and professional life; the political nightmare of 2017 still fills me with dread, despair, and anger). I no longer feel comfortable or content. In fact I often feel decidedly uncomfortable, but in the way that signals learning and growth. And instead of contentment I often feel a wild unbridled joy and excitement. I’m energized to go to work every day. I’ve sung and danced and laughed until my stomach hurt more times than I can count since blowing up my life. So I guess the lesson once again is, “You are braver than you believe, stronger than you seem, and smarter than you think.” Oh, and always take the phone call :).
So let’s start by writing a basic function FizzBuzz, which takes in a number and returns a string according to the following rules.
For multiples of three print “Fizz” instead of the number, and for multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. Otherwise, return the number itself as a string.
Here is my version (fizzbuzz.go), pretty simple right? Now that we’ve written our code, we need to test it.
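The file itself is embedded in the original post rather than reproduced here; a minimal version consistent with the rules above might look like this:

package fizzbuzz

import "strconv"

// fizzBuzz returns "Fizz" for multiples of three, "Buzz" for multiples of
// five, "FizzBuzz" for multiples of both, and the number itself (as a
// string) for everything else.
func fizzBuzz(num int) string {
	switch {
	case num%15 == 0:
		return "FizzBuzz"
	case num%3 == 0:
		return "Fizz"
	case num%5 == 0:
		return "Buzz"
	default:
		return strconv.Itoa(num)
	}
}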
Basic testing in Go is easy, and well documented. Go test cases are usually placed in the same directory as the code they are testing and typically named <filename>_test.go, where filename is the name of the file with the code under test.
There are four basic outputs we expect from FizzBuzz: Fizz, Buzz, FizzBuzz, and the input number. These can all be covered by 4 basic test cases that I wrote in fizzbuzz_test.go which provide the input 3, 5, 15, and 2 to the fizzBuzz function and validate the result.
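The test file is likewise embedded in the original post; a table-driven sketch consistent with that description:

package fizzbuzz

import "testing"

func Test_FizzBuzz(t *testing.T) {
	cases := []struct {
		input    int
		expected string
	}{
		{3, "Fizz"},
		{5, "Buzz"},
		{15, "FizzBuzz"},
		{2, "2"},
	}
	for _, c := range cases {
		if got := fizzBuzz(c.input); got != c.expected {
			t.Errorf("fizzBuzz(%d) = %q, want %q", c.input, got, c.expected)
		}
	}
}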
go test -v -race ./...
-v prints out verbose test results, showing a pass/fail for every test case run.
-race runs the Golang race detector, which will detect when two goroutines access the same variable concurrently and at least one of the accesses is a write.
Continuous Integration is crucial for fast & safe development. Using a tool like Travis CI or Circle CI makes it easy for developers to ensure all submitted code compiles and passes test cases. I set up my project to run gated checkins using TravisCI, starting with the golang docs and then adding some modifications. My .travis.yml file ensures that every change compiles and passes the full test suite before it is merged.
Code Coverage is another important tool that I include in every project where possible. While no percentage of code coverage will prove that your code is correct, it does give you more information about what code has been exercised.
I personally use code coverage to check if error cases are handled appropriately. Anecdotally I find that code coverage gaps occur around error handling. Also in Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems, the authors discovered that the majority of catastrophic failures are caused by inappropriate error handling and that “In 23% of catastrophic failures … the incorrect error handling in these cases would be exposed by 100% statement coverage testing on the error handling logic.”
Testing and verifying distributed systems is hard, but this paper demonstrates that rigorously testing the error handling logic in our programs dramatically increases our confidence that the system is doing the right thing. This is a huge win. I highly recommend using Code Coverage in your Go projects.
There are a variety of Code Coverage Tools out there. I set up my repo to use CodeCov.io. It easily integrates with TravisCI and is free for public repos. CodeCov.yml is my project’s configuration file, and testCoverage.sh is a script which runs all the tests in the project and creates a coverage.txt file, which is uploaded and parsed by CodeCov to create coverage reports.
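The script itself is not reproduced here, but the heart of a typical Go coverage script, using the standard go test flags, is a single command:

go test -race -coverprofile=coverage.txt -covermode=atomic ./...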
Now we have 100% test coverage of the current implementation with our unit test cases; however, four inputs cover only about 9.31e-10 of the 2^32 possible 32-bit integer values, a vanishingly small fraction of the input space. If the code were more complicated, or we had to test it in a black box manner, our confidence that the code was doing the correct thing for all inputs would be low.
One way to explore more of the input state space is to use property based testing. In a property based test, the programmer specifies logical properties that a function should fulfill. The property testing framework then randomly generates input and tries to find a counterexample, i.e. a bug in the code. The canonical property testing framework is QuickCheck, written by John Hughes; it has since been re-implemented in numerous other languages, including Go (Gopter is the GOlang Property TestER). While property based testing cannot prove that the code is correct, it greatly increases our confidence that the code is doing the right thing, since a larger portion of the input state space is explored.
The docs for Gopter are rather extensive, and explain all the bells and whistles, so we shall just go through a quick example. Property based tests can be specified like any other test case, I placed mine in fizzbuzz_prop_test.go for this example, but typically I include them in the <filename>_test.go file.
properties.Property("FizzBuzz Returns Correct String", prop.ForAll(
	func(num int) bool {
		str := fizzBuzz(num)
		switch str {
		case "Fizz":
			return (num%3 == 0) && !(num%5 == 0)
		case "Buzz":
			return (num%5 == 0) && !(num%3 == 0)
		case "FizzBuzz":
			return (num%3 == 0) && (num%5 == 0)
		default:
			expectedStr := strconv.Itoa(num)
			return !(num%3 == 0) && !(num%5 == 0) && expectedStr == str
		}
	},
	gen.Int(),
))
This test passes the randomly generated number into fizzBuzz, then for each case ascertains that the output adheres to the defined properties, i.e. if the returned value is “Fizz” then the number must be divisible by 3 and not by 5, etc. If any of these assertions do not hold, a counter-example will be returned.
For instance, say a zealous developer on the FizzBuzz project added an “!” to the end of the converted number string; the property based tests would fail with the following message:

! FizzBuzz Returns Correct String: Falsified after 3 passed tests.
ARG_0: 11
ARG_0_ORIGINAL (31 shrinks): 406544657
Elapsed time: 200.588µs
Now we have a counter example and can easily reproduce the bug, fix it and move on with development.
Where Gopter & QuickCheck excel beyond random input and fuzz testing is shrinking: they will try to reduce the input that caused the error down to a minimal failing input. While our example only takes one input, this is incredibly valuable for more complex tests.
I find Property Based testing incredibly valuable for exploring large state spaces of input, especially things like transformation functions. I regularly use them in addition to unit tests, and often find them just as easy to write.
To get started with property based testing in Go, install Gopter:

go get github.com/leanovate/gopter
The project scope has increased! We now need to provide FizzBuzz as a service and/or command line tool. Our FizzBuzz calculator may now be long lived, and can take advantage of caching results that users have already requested.
In order to do this I added a new interface, Cache, which allows the user to provide their cache of choice. That could be a simple in-memory cache backed by a dictionary, or perhaps a durable cache like Redis, depending on their requirements.
type Cache interface {
	Put(key int, value string)
	Get(key int) (string, bool)
}
And a new file, fizzBuzzHandler.go, with a method RunFizzBuzz, which takes an array of strings (presumably numbers), tries to convert them to integers, and then gets the FizzBuzz value for each one, either from the cache or by calculating it via our previously defined method.
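The handler file is embedded in the original post; a minimal sketch consistent with that description (the Handler and NewHandler names below match the test later in the post, but the exact implementation is an assumption) might look like:

package fizzbuzz

import "strconv"

// Handler computes FizzBuzz values, consulting a Cache before recalculating.
type Handler struct {
	cache Cache
}

func NewHandler(cache Cache) *Handler {
	return &Handler{cache: cache}
}

// RunFizzBuzz converts each input string to an integer and returns its
// FizzBuzz value, taken from the cache when present.
func (h *Handler) RunFizzBuzz(nums []string) ([]string, error) {
	results := make([]string, 0, len(nums))
	for _, s := range nums {
		num, err := strconv.Atoi(s)
		if err != nil {
			return nil, err
		}
		value, ok := h.cache.Get(num)
		if !ok {
			value = fizzBuzz(num)
			h.cache.Put(num, value)
		}
		results = append(results, value)
	}
	return results, nil
}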
Now we have new code that needs to be tested, so we create fizzBuzzHandler_test.go. Testing bad input is once again a simple unit test case. We could also simply test that the correct value of FizzBuzz is returned for a variety of supplied numbers when RunFizzBuzz is called; however, FizzBuzz returning the correct value has already been extensively tested above.
What we really want to test is the interaction with the Cache. Namely that values are stored in the cache after being calculated, and that they are retrieved from the cache and not re-calculated. Mocks are a great way to test that code interacts in the expected way, and to easily define inputs and outputs for calls.
Go has a mocking package, golang/mock. In Go, only interfaces can be mocked, and mocks are implemented via codegen: the mockgen tool generates an implementation of a mock based on your interface. Then, in a unit test case, a mock interface object can be created, expected method calls specified, and return values defined.
func Test_RunFizzBuzz_CacheMiss(t *testing.T) {
	mockCtrl := gomock.NewController(t)
	defer mockCtrl.Finish()

	mockCache := NewMockCache(mockCtrl)
	mockCache.EXPECT().Get(5).Return("", false)
	mockCache.EXPECT().Put(5, "Buzz")

	handler := NewHandler(mockCache)
	str, err := handler.RunFizzBuzz([]string{"5"})
	if err != nil {
		t.Error("Unexpected error returned", err)
	}
	if str[0] != "Buzz" {
		t.Error("Expected returned value to be Buzz", str)
	}
}
In the above code, I create a mockCache with the NewMockCache command, and define that I expect a Cache miss to occur, followed by a Put with the calculated value. I then simply call RunFizzBuzz and verify the output. This not only validates that the correct value is returned from RunFizzBuzz, but also that the cache was successfully updated.
Code Generated mocks should be checked into the code base, and updated when the interface changes as part of a code review.
go generate ./...
will run the code gen command specified in files with the comment: //go:generate <cmd>
For instance, to generate cache_mock.go when running go generate ./..., the following comment is added at the top of the file:

//go:generate mockgen -source=cache.go -package=fizzbuzz -destination=cache_mock.go
A fake is a test implementation of an interface, which can also be incredibly useful in testing, especially for integration tests or property based tests. Specifying all the expected calls on a mock in a property based test is tedious, and may not be possible in some scenarios. At these points Fakes can be very useful. I implemented cache_fake.go, a simple in-memory cache to use with fizzBuzzHandler_prop_test.go to ensure there is no unintended behavior when the cache is used with numerous requests.
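The fake is embedded in the original post; an in-memory fake of the Cache interface is only a few lines (the FakeCache name here is an assumption):

package fizzbuzz

// FakeCache is a map-backed test implementation of Cache.
// Note: it is not safe for concurrent use.
type FakeCache struct {
	data map[int]string
}

func NewFakeCache() *FakeCache {
	return &FakeCache{data: make(map[int]string)}
}

func (f *FakeCache) Put(key int, value string) { f.data[key] = value }

func (f *FakeCache) Get(key int) (string, bool) {
	value, ok := f.data[key]
	return value, ok
}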
Tests that utilize fakes can also easily be repurposed as integration or smoke-tests when an interface is used to abstract a network interaction, like with the FizzBuzz Cache. Running this test with the desired cache implementation can greatly increase our confidence that the interaction with the physical cache is correct, and that the environment is configured correctly.
The golang ecosystem provides numerous options for testing and validating code. These tools are free & easy to use. By using a combination of the above tools we can obtain a high degree of confidence that our system is doing the correct thing.
I’d love to hear what tools & testing setups you use, feel free to share on Twitter, or submit a pull request to the repo.
Presented at Papers We Love SF: Video & Slides [February 19th 2015]
Caitie McCaffrey stops by and talks about the Orleans: Distributed Virtual Actors for Programmability and Scalability paper by Bernstein, Bykov, Geller, Kliot, and Thelin.
Orleans is a runtime and programming model for building scalable distributed systems, based on the actor model. The Orleans programming model introduces the abstraction of Virtual Actors. Orleans allows applications to obtain high performance, reliability, and scalability. This technology was developed by the eXtreme Computing Group at Microsoft Research and was a core component of the Azure services that powered Halo 4, the award winning video game.
Abstract
Halo 4 is a first-person shooter on the Xbox 360, with fast-paced, competitive gameplay. To complement the code on disc, a set of services were developed to store player statistics, display player presence information, deliver daily challenges, modify playlists, catch cheaters and more. As of June 2013 Halo 4 had 11.6 million players, who played 1.5 billion games, logging 270 million hours of gameplay.
Orleans, Distributed Virtual Actors for Programmability & Scalability, is an actor framework & runtime for building high scale distributed systems. It came from the eXtreme computing group in Microsoft Research, and is now Open Source on Github.
For Halo 4, 343 Industries built and deployed a new set of services built from the ground up to support high demand, low latency, and high availability using Orleans and running in Windows Azure. This talk will give an overview of Orleans, the challenges faced when building the Halo 4 services, and why the Actor Model and Orleans in particular were utilized to solve these problems.
Presented as the Closing Keynote of SRECon15: Video & Slides [March 17th 2015]
The Halo 4 services were built from the ground up to support high demand, low latency, and high availability. In addition, video games have unique load patterns where the majority of the traffic and sales occurs within the first few weeks after launch, making this a critical time period for the game and supporting services. Halo 4 went from 0 to 1 million users on day 1, and 4 million users within the first week.
This talk will discuss the architectural challenges faced when building these services and how they were solved using Windows Azure and Project Orleans. In addition, we’ll discuss the path to production, some of the difficulties faced, and the tooling and practices that made the launch successful.
Presented at Craft Conf 2015 & Goto: Chicago 2015 Video & Slides [April 23rd 2015 & May 12th 2015]
As we build larger, more complex applications and solutions that need to do collaborative processing, the traditional ACID transaction model using coordinated 2-phase commit is often no longer suitable. More frequently we have long lived transactions, or must act upon resources distributed across various locations and trust boundaries. The Saga Pattern is a useful model for long lived activities and distributed transactions without coordination.
Sagas split work into a set of transactions whose effects can be reversed even after the work has been performed or committed. If a failure occurs, compensating transactions are performed to roll back the work. So at its core the Saga is a failure management pattern, making it particularly applicable to distributed systems.
In this talk, I’ll discuss the fundamentals of the Saga Pattern, and how it can be applied to your systems. In addition we’ll discuss how the Halo 4 Services successfully made use of the Saga Pattern when processing game statistics, and how we implemented it in production.
Presented at StrangeLoop 2015 Video & Slides [September 25th 2015]
This talk was incredibly well received, and I was flattered to see write-ups of it featured in High Scalability and InfoQ
The Stateless Service design principle has become ubiquitous in the tech industry for creating horizontally scalable services. However, our applications do have state; we have just moved all of it to caches and databases. Today, as applications are becoming more data intensive and request latencies are expected to be incredibly low, we’d like the benefits of stateful services, like data locality and sticky consistency. In this talk I will address the benefits of stateful services, how to build them so that they scale, and discuss projects from Halo and Twitter of highly distributed and scalable services that implement these techniques successfully.
Abstract
Every minute Twitter’s Observability stack processes 2+ billion metrics in order to provide Visibility into Twitter’s distributed microservices architecture. This talk will focus on some of the challenges associated with building and running this large scale distributed system. We will also focus on lessons learned and how to build services that scale that are applicable for services of any size.
Presented as the Evening Keynote at QconSF with Ines Sombra: Video, Slides, Resources, & Moment [November 16th 2015]
Surprisingly enough, academic papers can be interesting and very relevant to the work we do as computer science practitioners. Papers come in many kinds and areas of focus, and sometimes finding the right one can be difficult. But when you do, it can radically change your perspective and introduce you to new ideas.
Distributed Systems has been an active area of research since the 1960s, and many of the problems we face today in our industry have already had solutions proposed, and have inspired new research. Join us for a guided tour of papers from past and present research that have reshaped the way we think about building large scale distributed systems.
In December 2011 the IETF standardized the WebSocket protocol. Unlike the typical Request/Response messaging patterns provided by HTTP, this network protocol provides a full-duplex communication channel between a host and a client over TCP. This enables server sent events, reactive user experiences, and real time components.
The WebSocket protocol provides some advantages over the traditional HTTP protocol. Once the connection has been established, there is a point to point system of communication where both devices can communicate with one another simultaneously. This enables server sent events without using a work around like Comet or Long Polling. While these technologies work well, they carry the overhead of HTTP, whereas WebSocket frames have a wire-level overhead of as little as two bytes per frame. The full-duplex communication and low packet overhead make it an ideal protocol for real-time low latency experiences.
An important note: the WebSocket protocol is not layered on top of HTTP, nor is it an extension of the HTTP protocol. The WebSocket protocol is a lightweight protocol layered on top of TCP. The only part HTTP plays is in establishing a WebSocket connection via the HTTP Upgrade request. Also, the HTTP Upgrade request is not specific to WebSockets; it can be used to support other handshakes or upgrade mechanisms which will use the underlying TCP connection.
A client can establish a WebSocket connection by initiating a client handshake request. As mentioned above the HTTP Upgrade request is used to initiate a WebSocket connection.
GET /chat HTTP/1.1
HOST: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
If all goes well on the server and the request can be accepted then the server handshake will be returned.
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
If an error occurs and the server cannot accept the request, then an HTTP 500 should be returned to indicate that the request has failed and that the protocol is still HTTP.
Once the client server handshake is completed the TCP connection used to make the initial HTTP request has now been upgraded to a WebSocket connection. Messages can now be sent from either the client to the server or the server to the client.
As a developer, most of the nuances of the WebSocket handshake are hidden away by platform specific APIs and SDKs. In the .NET world, Windows 8 and Windows Server 2012 introduced native support for the WebSocket protocol, as did Internet Explorer 10. A variety of other platforms support WebSockets as well.
Using the .NET 4.5 Framework, the client code to establish a WebSocket connection in C# would look like this:

ClientWebSocket webSocket = new ClientWebSocket();
await webSocket.ConnectAsync(new Uri("ws://localhost/Echo"), CancellationToken.None);
Once the connection succeeds on the client the ClientWebSocket object can be used to receive and send messages.
Using the .NET 4.5 Framework on a simple server using HttpListener, the C# code to accept a WebSocket request and complete the handshake would look like this:

HttpListenerContext listenerContext = await httpListener.GetContextAsync();
if (listenerContext.Request.IsWebSocketRequest)
{
    // Passing null accepts the connection without selecting a sub-protocol
    WebSocketContext webSocketContext = await listenerContext.AcceptWebSocketAsync(null);
    WebSocket webSocket = webSocketContext.WebSocket;
}
else
{
    // Return a 426 - Upgrade Required status code
    listenerContext.Response.StatusCode = 426;
    listenerContext.Response.Close();
}
The call to AcceptWebSocketAsync returns after the server handshake has been returned to the client. At this point the WebSocket object can be used to send and receive messages.
WebSocket messages are transmitted in “frames.” Each WebSocket frame has an opcode, a payload length, and the payload data. Each frame begins with a header between 2 and 14 bytes in size. As you can see, this overhead is much smaller than the text based HTTP headers.
The first two bytes of every frame (bits 0 through F) are laid out as follows:

Bit 0      Final (FIN) flag
Bits 1-3   Reserved bits
Bits 4-7   OpCode
Bit 8      Mask flag
Bits 9-F   Payload length indicator

Depending on those fields, the header continues with:

2 bytes    Extended payload length (present if the payload is longer than 125 bytes)
6 bytes    Additional extended payload length (present if the payload length is >= 2^16)
4 bytes    Masking key (present if the masking bit is set)
The first 9 bits sent in every WebSocket frame are defined as follows: the Final bit marks the last frame of a message, the three reserved bits are set aside for protocol extensions, the 4-bit OpCode describes how to interpret the frame, and the Mask bit indicates whether a masking key is present. The variable length of a WebSocket header is therefore based on the size of the payload and whether the masking key is present.
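To make the bit layout concrete, here is a small sketch (in Go rather than the article’s C#, purely as an illustration; not code from the original post) that decodes the fixed two-byte portion of a frame header:

// decodeFrameHeader extracts the fields from the first two bytes of a
// WebSocket frame, per the layout described above (RFC 6455).
func decodeFrameHeader(b0, b1 byte) (fin bool, opcode byte, masked bool, payloadIndicator byte) {
	fin = b0&0x80 != 0           // final-fragment flag
	opcode = b0 & 0x0F           // 4-bit opcode
	masked = b1&0x80 != 0        // mask flag; client-to-server frames must set this
	payloadIndicator = b1 & 0x7F // 7-bit indicator; 126/127 signal extended lengths
	return
}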
The following table defines the WebSocket frame OpCodes. Applications should only set the Text or Binary OpCodes to specify how the payload data in the frame is interpreted.

Code | Meaning                | Description
-----|------------------------|------------
0x0  | Continuation Frame     | The payload in this frame is a continuation of the message sent in a previous frame that did not have its final bit set
0x1  | Text Frame             | Application Specific – The payload is encoded in UTF-8
0x2  | Binary Frame           | Application Specific – The payload is a binary blob
0x8  | Close Connection Frame | Specifies that the WebSocket connection should be closed
0x9  | Ping Frame             | Protocol Specific – sent to check that the client is still available
0xA  | Pong Frame             | Protocol Specific – response sent after receiving a ping frame. Unsolicited pong messages can also be sent.
Sending and receiving WebSocket messages is easy using the .NET Framework APIs.
byte[] receiveBuffer = new byte[receiveBufferLength];
while (webSocket.State == WebSocketState.Open)
{
    WebSocketReceiveResult receiveResult = await webSocket.ReceiveAsync(
        new ArraySegment<byte>(receiveBuffer), CancellationToken.None);
}
The WebSocketReceiveResult object contains the information sent in one WebSocket frame, including the OpCode, final bit setting, payload length, and the close status & reason if it’s a Close Connection Frame. The receiveBuffer will be populated with the data sent in the payload.
Sending a message is also simple, and an async method is provided in the .NET 4.5 Framework. The code below echoes the message received back over the channel; the data, message type, and final bit are specified in the parameter list:

await webSocket.SendAsync(new ArraySegment<byte>(receiveBuffer, 0, receiveResult.Count),
    WebSocketMessageType.Binary, receiveResult.EndOfMessage, CancellationToken.None);
Either endpoint can close the WebSocket connection. In order to do this the endpoint starts the WebSocket Closing Handshake. The initiating end point sends a WebSocket message with a closing status code, and an optional close reason (text), and sets the Opcode in the message to the Close Connection Frame (0x8). Once the message is sent the endpoint will close the WebSocket connection by closing the underlying TCP connection.
As an application developer it is important to note that either endpoint, server or client, can initiate the closing handshake. Practically this means both endpoints need to handle receiving the close frame. It also means that some messages may not be delivered, if the connection is closed while the messages are in transit.
Connection Close frames should include a status code, which indicates the reason the WebSocket connection was closed. These are somewhat analogous to HTTP Status Codes.
Code      | Definition           | Description
----------|----------------------|------------
1000      | Normal Closure       | The purpose for which the connection was established has been fulfilled
1001      | Endpoint Unavailable | A server is going down, or a browser has navigated away from a page
1002      | Protocol Error       | The endpoint received a frame that violated the WebSocket protocol
1003      | Invalid Message Type | The endpoint has received data that it does not understand. Endpoints which only understand text may send this if they receive a binary message, and vice versa
1004-1006 | Reserved             | Reserved for future use
1007      | Invalid Payload Data | The payload contained data that was not consistent with the type of message
1008      | Policy Violation     | The endpoint received a message that violates its policy
1009      | Message Too Big      | The endpoint received a message that is too big for it to process
1010      | Mandatory Extension  | An endpoint is terminating the connection because it expected to negotiate one or more extensions
1011      | Internal Error       | The server is terminating the connection because it encountered an unexpected error
1015      | TLS Handshake        | Used to designate that the connection closed because the TLS handshake failed
Close code ranges are allocated as follows:

Code Range | Definition
-----------|-----------
0-999      | Not used
1000-2999  | Reserved for use by the protocol definition
3000-3999  | Reserved for use by libraries, frameworks & applications. These should be registered with IANA
4000-4999  | Reserved for private use and cannot be registered
Once again most of the details are dealt with by WebSocket libraries in your framework of choice. Application developers must decide when the connection should be closed, should set the appropriate connection close code and may also set a connection close reason.
The .NET Framework makes this very easy by providing an asynchronous method which takes the connection close code and close reason as parameters:

await webSocket.CloseAsync(WebSocketCloseStatus.NormalClosure, "Normal Closure", CancellationToken.None);
As mentioned before, Windows 8 and Windows Server 2012 introduced native support for the WebSocket protocol. And because the Xbox One runs a variant of the Windows 8 operating system, it also has built-in support for WebSockets.
Version 4.5 of the .NET framework introduced support for WebSockets through the System.Net.WebSockets namespace. The underlying connection passes through HTTP.sys in the kernel, so timeout settings in the HTTP.sys layer might still apply.
WinRT only exposes APIs for creating a WebSocket client connection. There are two classes to do this in the Windows.Networking.Sockets namespace, MessageWebSocket & StreamWebSocket.
The WinRT API is also available to C++ developers. For developers that want more control WinHTTP provides a set of APIs for sending WebSocket upgrade request, and sending and receiving data on WebSocket connections.
All the latest versions of common browsers, with the exception of Android, support the WebSocket protocol and API as defined by the W3C.
The ASP.NET team has built a high-level bi-directional communication API called SignalR. Under the hood SignalR picks the best protocol to use based on the capabilities of the clients. If WebSockets are available it prefers to use that protocol, otherwise it falls back to other HTTP techniques like Comet and Long Polling. SignalR has support for multiple languages including .NET, Javascript, and iOS and Android via Xamarin. It is an open source project on GitHub.
WebSockets are a great new protocol to power real time applications and reactive user experiences, thanks to lightweight headers and bi-directional communication. The protocol is also a great fit for implementing Pub/Sub messaging patterns between servers and clients. However, WebSockets are not a silver bullet for networked communications; they are incredibly powerful but have their drawbacks. For instance, because WebSockets require a persistent connection, they consume resources on the server and require the server to manage state. HTTP and RESTful APIs are still incredibly useful and valid in many scenarios, and developers should consider the uses of their APIs and applications when choosing which protocol to use.
TLDR; My first console was a SNES. I learned to program in High School. I attended Cornell University and got a B.S. in Computer Science. My first job out of college was as a network tester on Gears of War 2 & 3. I joined 343 Industries as a Web Services Developer in January of 2010, and recently shipped Halo 4 on November 6th 2012.
My story starts out in the typical fashion: I fell in love with Video Games after my parents got me an SNES as a kid. However, here is where my story diverges; my career in the games industry was not decided at 7.
In fact I had already chosen my career a few years earlier. When I was 5, I announced to my mother that I did not need to learn math because I was going to be a writer when I grew up. I had an active imagination, and loved exercising it by writing stories of my own. My first major work was a story about ponies entitled “Hores.” Luckily my parents would not let me give up on math, and helped me with my spelling.
It turned out that I actually did enjoy math; I was just ahead of my classmates in comprehension, which is why I found it boring in grade school. In Middle School I was placed into the Advanced Math program along with about 25 other students selected to take accelerated courses. I enjoyed the problem sets and challenges, and more importantly I excelled at them. This put me on Mrs. Petite’s short list of students to recruit.
Mrs. Petite taught Computer Science at my High School, and she notoriously recruited any advanced math or science student to take her class. She was stubborn and didn’t take no for an answer, so Sophomore year, instead of having an extra period of study hall like I originally intended, I was in her Intro to Programming class, writing a “Hello World” application in Visual Basic.
Mrs. Petite quickly became my favorite teacher and I took AP level Computer Science classes Junior and Senior year learning C++ and Java, respectively. We learned programming basics, object oriented programming, and simple data structures with fun assignments like writing AI for a Tic-Tac-Toe competition, programming the game logic in Minesweeper, and creating a level in Frogger.
During High School I began to realize that I wasn’t just good at programming, but I truly enjoyed it. Computer Science wasn’t just a science, it was a means of creation. Like writing, programming gave me the power to start with a blank canvas and bring to life anything I could imagine.
“Programming gave me the power to start with a blank canvas and bring to life anything I could imagine.”
Throughout Middle School and High School I played my fair share of video games. Most notably I acquired a PlayStation and raided dozens of tombs with Lara Croft, and played Duke Nukem 3D, my first First Person Shooter, but games were still not my main focus. I ended up spending more of my time programming, playing lacrosse, singing in choir, participating in student council, and spending time with my friends. Video Games were great, but I still had not decided to pursue a career in the Games Industry.
I graduated from High School not only having learned to program in Visual Basic, C++, and Java, but with a passion for programming. In the Fall of 2004 I decided to continue on my coding adventure by enrolling in the Engineering School at Cornell University focusing on Computer Science.
I entered Cornell University expecting to major in Computer Science, but to be sure I dabbled in other subjects (Philosophy, Evolutionary Biology, and Civil Engineering) before declaring my major. To this day I still have a diverse set of interests, and I enjoyed all of these subjects immensely, but none of them lived up to the joys of coding.
College was this beautiful, wonderful, stressful blur. I ran on massive amounts of caffeine and memories of crazy weekends spent with friends. We worked really hard, but played really hard too. Even with all the pressure, stress, and deadlines I was having the time of my life. The classes were fast paced, I was being challenged, and I was learning an immense amount from Data Structures to Functional Programming to Graphics to Security.
Sophomore year I declared myself for CS, and also became a Teaching Assistant for CS 211 (Object Oriented Data Structures and Programming). In addition, another immensely important event happened in the fall of my Sophomore year: I bought an Xbox 360, and Gears of War. I loved the game, and spent many nights during winter break staying up till 2am chainsawing locusts. I also spent a significant amount of time playing Viva Piñata that break; like I said, a diverse set of interests. This new console, some fantastic games, and the Xbox Live enabled social experiences reignited my passion for gaming. Now I began to consider Game Development as a career.
After Sophomore year I took a somewhat unconventional but completely awesome internship at Stanford’s Linear Accelerator Center (SLAC). I lived in a house with 20 brilliant physics majors, learned about black holes, dark matter, and quantum computing while helping to manage the Batch farm which provided all the computing power for the physicists working at the center. It was an absolutely amazing experience.
After Junior year I once again went West for the summer, this time to Redmond, Washington as a Microsoft intern working on Windows Live Experiences (WEX). During that summer I got to exercise my coding chops and, most importantly, fully solidified the opinion that I wanted to be a developer. I left the Pacific Northwest at the end of summer with two job offers in WEX, but by then I knew I really wanted to work on games. So after some negotiation and another round of interviews I managed to secure a 3rd offer in Microsoft Game Studios, as a Software Engineer in Test working on the Networking and Co-op of Gears of War 2. I was beyond thrilled.
I graduated from Cornell in 2008 with a Bachelors of Science in Computer Science from the Engineering School. It was a bittersweet moment, I had loved my time at Cornell and most of my friends were staying on the East Coast, but I knew exciting things were waiting for me in Seattle.
In July of 2008 I moved out to Seattle, and joined the Microsoft Game Studios team working on Gears of War 2. I quickly was thrown into the fire as I was assigned ownership of testing the co-op experience. It was terrifying and exciting to be given so much responsibility right away. I eagerly jumped into the project and joined the team in crunching immediately after starting.
The first few months in Seattle were a whirlwind as we pushed to get the game through to launch. The hours were long, but I was passionate about the project and I was learning a lot. It was an amazingly gratifying experience the day Gears of War 2 went Gold. When the game launched I had another immensely satisfying moment: my computer science best friend from college and I played through the game in co-op, and at the end we saw my name in the credits. Life Achievement Unlocked!
I love social game experiences, both collaborative and competitive, so post-launch I focused a lot of my energy on improving my skills in the areas of networking and services. As we moved into sustain on Gears of War 2, I began focusing on the matchmaking and networking experience. I spent my free time diving through the Xbox XDK, learning about the networking stack, and playing around with Xbox Live Services. As work began on Gears of War 3 I took ownership of testing the matchmaking code and became very involved in dedicated servers for multiplayer.
In the Fall of 2009 I was asked to temporarily help the fledgling 343 Industries studio ship one of the first Xbox Title Applications, Halo Waypoint. I knew it would mean extra hours and a lot of work, but the opportunity to work on new technology and make connections in other parts of Microsoft Game Studios was too good to pass up. I dove headfirst into the transport layer of the Waypoint Console app, and helped get them through launch in November 2009.
Over the next few months I began to evaluate what I wanted to do next in my career. Working on Gears of War 3 was a great opportunity, but I really wanted to be a developer. The parts of my testing job that I found most satisfying were designing systems, coding internal tools, and researching new technology. So when the opportunity to join 343 Industries as a developer appeared in January 2010, I jumped at it. It was a perfect fit. After reaching out to my contacts in 343 and then participating in a full round of interviews, I was offered a position on the team as a web services developer writing code that would power the Halo Universe and enable social experiences; I excitedly accepted!
One of my first tasks at the studio was working on the Spartan Ops prototype. I was elated that I got to utilize both my technical and creative skills to help create a brand new experience; my Spartan adventures were off to an amazing start! The rest is history, and a few years later we shipped Halo 4. After launch I once again had an intense moment of elation after playing through Halo 4 in co-op with my college BFF and seeing my name in the credits. It never gets old.
Some thoughts, all my own and anecdotal. To be successful as a Game Developer, first and foremost you have to be passionate about what you do, whether it is programming, art, design, writing, or something else. You need to be passionate about games and your chosen field. In addition, I believe my love of learning has been a huge asset in my career development and growth. I am not afraid to dive into new technologies, or to get my hands dirty in a code base I do not understand. I believe doing this helped me get into the industry, and continuing to do so keeps me valuable. Lastly, do not be afraid to ask for what you want; no one is going to just hand you your dream job. Of course there is a bit of luck and timing involved in breaking into the Industry, but working incredibly hard is the best way I know to help create those opportunities.
Design Docs, Markdown, and Git
Our original design doc process involved writing a Microsoft Word document and sharing it via SharePoint. Feedback was gathered via in-person reviews, doc comments, and emails. Approval was then done over email. To signal that a document was the “approved plan of record” versus “an under-review draft,” we toggled a property on the document. Users could filter documents on the SharePoint site by this property to disambiguate between the two states.
This worked fine when we were a small team with a small number of documents, but it became challenging as the team grew. For context, the Azure Sphere team started out as a handful of people working in Microsoft Research and has grown rapidly over the past three years as we’ve gone from research project to Generally Available product.
Some specific challenges were identified via the AS3 (Azure Sphere Security Services) team’s retrospective process, and we kept these pain points in mind when evaluating new options.
To address some of these challenges, the AS3 team began writing design documents in Markdown and checking them into a new EngineeringDocs Git repo in Azure DevOps (ADO). Reviews are conducted via pull requests: reviewers add comments, the author pushes changes, and the comments are then resolved. Approval is given by signing off on the pull request, and anything in master is considered the approved plan of record. Versioning is also greatly simplified, as anyone can submit a pull request to update a document.
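To make the mechanics concrete, here is a rough sketch of the flow from the command line; the org, branch, and file names are placeholders, not our actual conventions:

```sh
# Clone the docs repo from Azure DevOps (org/project are placeholders).
git clone https://dev.azure.com/<org>/<project>/_git/EngineeringDocs
cd EngineeringDocs

# Draft the design on a topic branch.
git checkout -b designs/feature-foo
git add designs/feature-foo.md
git commit -m "Add design doc for feature foo"
git push -u origin designs/feature-foo

# From here, open a pull request in ADO, gather comments, push updates to
# the same branch, and resolve the comments. Once the Approver signs off,
# merging to master makes the doc the approved plan of record.
```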
One of the first early decisions we made was where design documents should live. We discussed two options: keeping each design doc in the repo alongside the code it describes, or collecting all design documents in a single dedicated repo. We chose to use a single repo.
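As an illustration only, not our actual layout, a single docs repo might be organized something like this:

```
EngineeringDocs/
├── designs/
│   ├── feature-foo.md
│   └── feature-bar.md
└── images/
    ├── feature-foo-architecture.png
    └── feature-foo-architecture.vsdx
```

Keeping the exported image next to the Visio source it came from makes it easy to find and update both together.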
The Azure Sphere team uses the OARP model (Owners, Approvers, Reviewers, Participants) for making decisions, so the sections below describe approval and stakeholders in this context. I recommend having a well-defined decision-making process and integrating whatever that is for your team into the design document process.
Identify Reviewers and Approvers via a Pull Request
The first step in our Design process is identifying the stakeholders. The first pull request includes the title of the Design Doc and a table listing the OARP assignments for this document. The pull request author is always the Owner.
This serves a few purposes: it forces agreement up front on who the stakeholders are, and it durably records those role assignments alongside the document itself.
Once the stakeholders are all identified, the Approver approves the pull request, and the Owner checks it in.
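For illustration, the checked-in shell document can be as small as a title plus the stakeholder table; the names below are placeholders:

```markdown
# Feature Foo Design

| Role         | People        |
| ------------ | ------------- |
| Owner        | Jane (author) |
| Approver     | Sam           |
| Reviewers    | Priya, Marcus |
| Participants | Services team |
```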
Writing the Design Document
To author the design document, the Owner creates a new branch and modifies the checked-in shell document. It is highly recommended that input from Reviewers and Approvers is informally gathered prior to writing the document, via whiteboard sessions, chats, hallway conversations, etc. This makes the design review process more collaborative and ensures there are few surprises during the formal review.
Design docs are written in Markdown. Architectural diagrams are added to the design doc by checking in images or by using Mermaid. The AS3 team often generates architectural images using Microsoft Visio; it is highly recommended that these Visio diagrams are checked in as well so they are easy to modify later.
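Mermaid is particularly convenient for simple diagrams because the picture is just text in the same file and diffs cleanly in a pull request. A minimal, made-up example:

```mermaid
graph LR
    Client[Client] --> API[Public API]
    API --> Store[(Document Store)]
```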
Once the design doc is ready for review, the engineer submits a new pull request. All members of the OARP model are listed as reviewers on the pull request.
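If you prefer to stay on the command line, the Azure DevOps CLI can create the pull request and attach reviewers in one step. This is a sketch; the branch name and reviewer aliases are placeholders:

```sh
# Hypothetical: open the design PR with the OARP members listed as reviewers.
az repos pr create \
  --repository EngineeringDocs \
  --source-branch designs/feature-foo \
  --target-branch master \
  --title "Design: Feature Foo" \
  --reviewers jane@contoso.com sam@contoso.com
```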
Design Pull Request
Once the pull request has been submitted, design review stakeholders can read the document and submit feedback via comments on the pull request. All comments must be addressed and marked either as resolved, via document updates, or as won’t fix.
The document can be committed to master once the Approver has approved the pull request. The design is now considered the plan of record.
Design Review Meeting
Design review meetings are not required but are often held. A meeting invite is sent out ahead of time. Owners, Approvers, and Reviewers are considered required attendees; Participants are considered optional.
The meeting invite should be updated with a link to the pull request for the design doc to be reviewed at least one business day prior to the meeting. The first 10–15 minutes of the meeting are set aside for folks to read the document and add comments to the pull request if they have not done so already. In either case, feedback is added via comments on the pull request.
We provide two ways for folks to review the document, ahead of time or in the meeting, to accommodate multiple working styles on the team. Some folks prefer to digest and think about a design document for a while before providing feedback; others are more comfortable providing feedback on the spot.
After the reading period, the design review meeting focuses on discussing the comments. The Owner takes notes and records the in-room decisions in the pull request comments.
Updating the Design
Throughout the course of the project, design docs may need to be updated. This can happen just after design, if a major change was made during implementation, or later in the life of the project, when a new feature or requirement requires a modification.
Updating the design doc follows a very similar process: a pull request with the proposed changes is submitted, and the original Owner and Approver should be included as required reviewers.
The AS3 team considers the experiment incredibly successful, so much so that the broader Azure Sphere team, including the Program Managers, has begun adopting it.
To summarize, all of the challenges we experienced with Word documents and SharePoint were addressed by using Git and Markdown.
By utilizing a toolchain that developers already use day to day, the process feels more lightweight, and writing design documents feels less arduous. The Program Management team has also been incredibly receptive to using Markdown and Git. While these are new tools for some of them, they’ve embraced our growth mindset culture and dove right in.
One of the biggest benefits I’ve observed is the clarity it has brought to how decisions are made, and the durable record it leaves of when they were made. On a fast-growing team like Azure Sphere, clarity and durable communication are key to successfully scaling the business and the team.