Thursday, October 31, 2019

Week 13: Last Post

The end is nigh.

While we still have two more weeks until we are presenting, the blogs are due tonight! So I will do my best to update on this week and give an idea of what we are doing going forward.

This Week

This week I focused on the drill-down functionality; I wanted to get as much of it done as I could before the final sprint. In the end, I got about three-quarters of the work done. Throughout the week I collaborated with Patrick quite a bit as our work was heavily crossing over, and by the end of the week we were pretty up to date on the merging side of things.

To get the 'ontimeratio' bar chart drill-down working I had to make some changes to Chris's Jobs endpoint in the dispatcher API. The change allows it to take an extra parameter that, if given, changes the database query to return only the jobs that were late or only the jobs that were on time. It is super satisfying to see the drill-down working in action.
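
Conceptually the change boils down to something like this little TypeScript sketch (the real endpoint lives in the .NET dispatcher API, and the parameter and field names here are made up for illustration):

```typescript
// Not the real dispatcher code -- just a sketch of the idea behind the extra parameter.
interface Job {
  id: number;
  completedOnTime: boolean;
}

// When onTime is omitted the endpoint behaves exactly as before and returns
// every job; when it is supplied, only late or only on-time jobs come back.
function filterJobs(jobs: Job[], onTime?: boolean): Job[] {
  if (onTime === undefined) {
    return jobs;
  }
  return jobs.filter(job => job.completedOnTime === onTime);
}
```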


Sending a different URL request depending on what element the user clicked on the bar chart.

I noticed I wasn't getting much sun even though it was so nice outside, so this week I began taking short 5-10 minute sun breaks when I felt like it. These also gave me time to clear my head or mull over the problem I was on, which I think helps a lot in a development environment. In my next job, I think I'll prioritize these mini-breaks a lot more.

Kayla left this week for greener pastures in a field she is more familiar with (medical). She was the operations manager and the one who instigated bringing me and old mate Pat on as project students in the first place, so we owe a lot to her for giving us this opportunity, and I think we will miss having her around for the last couple of weeks we have at fieldGo. Now we are just a bunch of blokes haha.

Going Forward

The final sprint will have us focused on the presentability of the project, since we are using it as a demo at our poster evening in a couple of weeks. That means working on outstanding graphical bugs as well as some crucial functionality bugs, plus a wide range of testing to hopefully pick up on anything that might break the application during a demo. Our current dataset could also use more fleshing out; I think we would be wise to select a dataset that looks a little more realistic.

I still intend to put a little extra time in after the demo to clean up our work so that Chris doesn't take on a huge burden implementing it after we leave.

After work we had a meetup at a pub to see Kayla off; it was super enjoyable and we had some good laughs. I think I've been lucky to get this position and to be a part of this super cool team, even if it was just for a short amount of time.



Thanks for reading and thank you to fieldGo for taking us on this semester. It's been great.

Week 12: Fourth Sprint

Going into this demo we had Kerry, the CEO of the company, with us. He had some good suggestions for high-level functionality of the page: some design changes which old mate Pat was to take care of, and a drill-down feature wherein if you click on an element of a graph it flicks you over to the main page and displays a list of the jobs used to determine the statistic you clicked on. I volunteered to take this on as it was quite apparent that old mate Pat was becoming a bit swamped with front-end tasks.

After I finish the drill-down functionality I intend to move on to resolving some outstanding front-end bugs before our presentation on the 14th. During my work on the drill-down I found a couple of bugs, one on my end and another on old mate Pat's end. Luckily they were quite easy to resolve, so we fixed these up and continued ahead.



Implementing the drill-down involved a bit of learning to start with, while working on my first goal of getting it working for the allocated vs unallocated pie chart.

To implement the drill-down I had to add onto old mate Pat's existing code. I added a listener to the chart which executes a method of my choosing when a user clicks on one of the graph elements. Said method takes the data required by the scheduler page and navigates over there using the data as parameters. From there the scheduler page filters itself as required to show the jobs related to the element the user clicked on. As a use case example, if the user clicks on the 'allocated' portion of the pie chart to see which jobs are allocated, it flicks them over to the scheduler page and shows them those jobs, like so:
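
To give an idea of the dashboard side of this, here is a minimal TypeScript sketch of that listener-and-navigate wiring. It is not the actual code: the component, route and parameter names are invented, and the real handler hangs off the select event of the charting library Pat set up.

```typescript
import { Component } from '@angular/core';
import { Router } from '@angular/router';

// Hypothetical shape for whatever the chart's select listener hands back.
interface ChartSelection {
  label: string; // e.g. 'allocated' or 'unallocated'
}

@Component({
  selector: 'app-dashboard',
  template: `<!-- pie chart lives here -->`,
})
export class DashboardComponent {
  constructor(private router: Router) {}

  // Wired up as the chart's click/select handler: take the clicked element
  // and flick the user over to the scheduler page with it as a parameter.
  onSliceSelected(selection: ChartSelection): void {
    this.router.navigate(['/scheduler'], {
      queryParams: { drilldown: selection.label },
    });
  }
}
```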


After getting over this initial learning hump I moved onto incorporating the global filters into the drill-down, so that if there are global filters set they also carry over to the scheduler page and are reflected in the jobs shown.


Above is a snippet of code wherein I receive the parameters into a local object, then execute a switch statement that discerns which element was clicked on the dashboard page and calls the setFilters() method with the appropriate parameters.
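
The rough shape of it is something like this sketch rather than the exact code; the parameter name and filter values are made up, though setFilters() is the real method on the scheduler page.

```typescript
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-scheduler',
  template: `<!-- job list lives here -->`,
})
export class SchedulerComponent implements OnInit {
  constructor(private route: ActivatedRoute) {}

  ngOnInit(): void {
    // Receive the parameters sent over from the dashboard page.
    this.route.queryParams.subscribe(params => {
      // Discern which element was clicked and set the filters to match.
      switch (params['drilldown']) {
        case 'allocated':
          this.setFilters({ allocated: true });
          break;
        case 'unallocated':
          this.setFilters({ allocated: false });
          break;
        case 'late':
          this.setFilters({ onTime: false });
          break;
        default:
          this.setFilters({});
      }
    });
  }

  // Stand-in for the real setFilters() on the scheduler page.
  private setFilters(filters: { allocated?: boolean; onTime?: boolean }): void {
    // ...apply the filters to the displayed jobs...
  }
}
```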

Next week's blog will be the final one, not because we are done at fieldGo but because the blogs are due. Oh well, I'll try to give it a nice ending haha.

We finished off the week with both of us making good progress on drill-down and filtering. Hopefully next week we can polish these two features off and spend the rest of our time ironing out the kinks before presenting the demo on the 14th.




Wednesday, October 30, 2019

Week 11: Getting back into it

Upon returning from the break we quickly began to feel like we were running out of time. After reassessing our outstanding goals for the project it became quite clear that old mate Patrick had more outstanding tasks than me, so going forward I was going to help him on the front-end.

This week was somewhat short in that, upon returning, I spent some time familiarizing myself with where I was up to previously, then spent the rest of the week delving back into unit testing on the front-end. Not too much to mention here as I was pretty stuck! But I felt like I was pretty close to working it out as the week ended.

Something I forgot to mention in the previous weeks' blogs was that I had also spent some time refactoring the integration test code in the back-end API solution. In its initial state, every endpoint's/project's tests were reliant on the same test model (a fake database for testing). This meant that when a new test required a change to the test model, the developer would then have to go and change all the affected tests across several projects. This made the tests hard to maintain, which was pretty bad. I came up with a way for each project to have its own test model, but it required a lot of repeated code across the projects. Chris and I tried to find a way to resolve this repeated-code issue until we ran into an error; at that point I had to move on to more pressing issues, but I intend to come back and implement my somewhat hacky solution after the poster evening. Although it is 'hacky', it is still more maintainable than the previous setup.
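
To sketch the idea (in TypeScript rather than the actual C# test code, and with made-up names): each project gets its own small factory for test data instead of everyone sharing one global model.

```typescript
// Illustrative only: one of these per test project, so adding a record here
// for a new test no longer breaks the tests sitting in every other project.
interface TestJob {
  id: number;
  status: 'allocated' | 'unallocated';
}

interface TestModel {
  jobs: TestJob[];
}

function createJobsTestModel(): TestModel {
  return {
    jobs: [
      { id: 1, status: 'allocated' },
      { id: 2, status: 'unallocated' },
    ],
  };
}
```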

Tests are far more likely to get written and used if they are easy to maintain.

Since our MVP was complete, our next concern moved to filtering. My endpoints were for the most part already ready for this functionality, but a few suggestions from the team required me to make some corrections, which was no worries.



We plan to have three global filters and one filter that only applies to the 'ontimeratio' data. For old mate Patrick to achieve this I needed to communicate with him often regarding how to correctly use the endpoints; it was good to be working next to each other, which allowed for effective communication.

There were a couple more design changes implemented by old mate Pat on the front-end: two statistics that were shown as numbers were merged into a single bar chart, which greatly improved the readability of the page, nice.




Week 9: Authentication

This week the main task I worked on was implementing full authentication for the API. Last week I implemented user permissions with respect to the data the API returns, but not full authentication. In order to implement auth I had to learn how the current auth system was set up so that I could integrate it with my own. I went down quite the rabbit hole here in an attempt to get my head around it all; I ended up speaking to Chris and he took me over the basics, which gave me enough information to continue forward. As I pursued implementing authentication, Patrick took on the same venture, which involved something similar on his end wherein he had to work with the already existing auth system.

This was quite a learning curve but I managed to get my head around the basics. I might need to pursue further education on this, but that's how it goes for developers; learning on the job is the norm.

Auth is assisted by a separate authentication API which both the front-end and back-end systems communicate with; this API handles tokens and checks user-specific permissions.
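
Roughly speaking, the back-end's side of that conversation looks something like this sketch. The URL and the response shape below are assumptions for illustration only, not fieldGo's actual auth API.

```typescript
interface AuthResult {
  valid: boolean;
  permittedJobTypes: string[];
}

// The back-end hands the caller's token to the separate authentication API,
// which says whether it is valid and what the user is allowed to see.
async function checkToken(token: string): Promise<AuthResult> {
  const response = await fetch('https://auth.example.com/validate', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    return { valid: false, permittedJobTypes: [] };
  }
  return (await response.json()) as AuthResult;
}
```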

At this point Pat had made quite good progress with the front-end, which allowed the team to give some design-related feedback in order to improve the UX/UI design.


Along with the auth I also needed to write the equivalent tests to ensure the functionality worked in practice. I made a couple of tests to account for users with restricted permissions and a user that is not restricted. If a user has restrictions then the API only returns the job types that the user has permission to see.

With the completion of auth we are nearing the end of implementing the MVP we set as our first goal. If we polish this off in time we will be able to move onto added functionality to fill in the rest of our time at fieldGo, which would be great as it will allow us to have a more functional page to show at our poster evening.


Now would be a good time to talk about a program I am using often to test my endpoints: Postman is an awesome application that allows you to send API requests and examine the request and response in detail. Would recommend it to API developers for sure.

Week 8: Second Sprint

By this point I have implemented all the endpoints for the currently required functionality. I have minimal levels of filtering implemented at this stage; I suspect that I'll iterate on this in the future.

Patrick has succeeded in consuming my endpoints on the front-end, so it's nice to see it all working together. I enjoyed the phase of writing the database query logic; it involved a fair bit of problem solving. I had made one of the endpoints inefficient by calling the database more times than needed, and Chris pointed out a smarter approach in which I would only need to call the database once. I learnt the lesson of paying attention to how often I am doing intensive operations like this. In small applications performance is often not an issue, but in a business environment like this the software can be holding massive amounts of data, and any small inefficiency can heavily affect the program's performance.
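
As a rough illustration of the lesson (not the actual query code, which is C#/.NET), the difference is essentially this:

```typescript
interface Job {
  status: string;
}

// Before: one database round trip per status the dashboard cares about.
async function countByStatusNaive(
  statuses: string[],
  countInDb: (status: string) => Promise<number>,
): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  for (const status of statuses) {
    counts.set(status, await countInDb(status)); // N calls to the database
  }
  return counts;
}

// After: fetch the jobs once (or GROUP BY in the database) and tally in memory.
function countByStatusGrouped(jobs: Job[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const job of jobs) {
    counts.set(job.status, (counts.get(job.status) ?? 0) + 1);
  }
  return counts;
}
```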

The sprint demo commenced as soon as we arrived on Tuesday. With little time to prepare, I booted the API up in debug mode and prayed everything worked. Nothing broke, which allowed me to gather feedback on how it was configured, as well as on some extra filtering functionality I could add in anticipation of us getting to filtering on the front-end further down the line.

Following the demo I moved onto implementing the feedback before we did sprint planning the next day. Chris had also pointed out another task regarding the endpoints that I'll need to do: the implementation of user permissions in the endpoint calls, so that the endpoints only return data that the requesting user has permission to see. This would involve quite a bit of reconfiguring and a lot of learning about how the user authentication structure works.


Above is some example code from one of my endpoints, inside a service that implements my statistics interface. In it I take some filter parameters and construct a database query that I use to determine the ratio of jobs completed late vs on time. For each filter parameter provided, I add an additional 'where' clause to the query to narrow down to the desired result.


Here at the end of the function I resolve the database query and do some simple math to work out the ratio at which the selected jobs were completed on time. If it breaks, we log the error and return a null result for handling in the controller.
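
Putting the two snippets together, the overall shape is roughly this. It's a TypeScript sketch with made-up filter names; the real service is C#/.NET and queries the database rather than an in-memory array.

```typescript
interface JobRecord {
  completedOnTime: boolean;
  region?: string;
  jobType?: string;
}

interface OnTimeFilters {
  region?: string;
  jobType?: string;
}

function getOnTimeRatio(jobs: JobRecord[], filters: OnTimeFilters): number | null {
  try {
    // Each supplied filter adds another 'where' clause to the query.
    let selected = jobs;
    if (filters.region !== undefined) {
      selected = selected.filter(j => j.region === filters.region);
    }
    if (filters.jobType !== undefined) {
      selected = selected.filter(j => j.jobType === filters.jobType);
    }

    // Resolve the query and do the simple math for the on-time ratio.
    if (selected.length === 0) {
      return null;
    }
    const onTime = selected.filter(j => j.completedOnTime).length;
    return onTime / selected.length;
  } catch (error) {
    // Log the error and return a null result for the controller to handle.
    console.error('Failed to compute on-time ratio', error);
    return null;
  }
}
```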

The back-end system follows the above structure for all endpoints, utilizing dependency injection to keep our service code modular and reduce coupling between back-end operations. The interface sits between the service and the controller and acts to ensure that the service fulfils the interface's required public-facing methods.
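
A bare-bones sketch of that structure, again in TypeScript rather than the real ASP.NET code and with invented names:

```typescript
interface IStatisticsService {
  getOnTimeRatio(): Promise<number | null>;
}

class StatisticsService implements IStatisticsService {
  async getOnTimeRatio(): Promise<number | null> {
    // ...build and run the database query here...
    return 0.75;
  }
}

class StatisticsController {
  // The controller only knows about the interface, so the concrete service
  // can be swapped out (e.g. for a fake in tests) without touching it.
  constructor(private readonly statistics: IStatisticsService) {}

  onTimeRatio(): Promise<number | null> {
    return this.statistics.getOnTimeRatio();
  }
}

// The composition root wires the concrete service in.
const controller = new StatisticsController(new StatisticsService());
```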

I'm pleased to be gaining a heightened understanding of the code I'm working with; this will make it easier to add authentication and other features down the line.



Week 7: Learning curves

In the second week of the sprint I encountered some learning curves; iteration has become the name of the game due to my initial code not being entirely optimal or capable. Luckily Chris is very helpful and ready to help me out if I find myself getting a little too stuck for my own good. I try my best to dig myself out before going for help though; usually this results in him giving me a quick answer since by then I understand the question I'm asking so well.

We utilize Swagger (https://swagger.io/docs) to generate documentation for the API endpoints, which allowed Patrick to easily understand how the API is used without me putting too much effort into writing extensive documentation.

Integration testing took up a lot of my time this week as I constructed test cases, which helped me realize my API's logic needed reworking. I can see why people talk a lot about test-driven development, wherein developers write the unit/integration tests before writing the code itself. My tests exercise the entire API using a test model database so that they are not dependent on a live database.



Pictured above is some integration testing code. I tried to cover as many edge cases as possible, but I think I'll find myself iterating on this in the future.
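
The real tests are written against the .NET API, but the flavour of edge case they chase is roughly this made-up, Jasmine-style sketch:

```typescript
// Hypothetical helper standing in for the endpoint's ratio calculation.
function onTimeRatio(totalJobs: number, onTimeJobs: number): number | null {
  return totalJobs === 0 ? null : onTimeJobs / totalJobs;
}

describe('on-time ratio (sketch)', () => {
  it('returns the ratio when matching jobs exist', () => {
    expect(onTimeRatio(4, 3)).toBe(0.75);
  });

  it('returns null instead of dividing by zero when nothing matches', () => {
    expect(onTimeRatio(0, 0)).toBeNull();
  });
});
```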

Not a whole lot to report for this week, I spent a large amount of time reading code haha.

Week 6: First sprint

The first sprint.




The week kicked off with the sprint demo for the previous week, in which Chris showed his progress with the dispatcher (the web app upon which we are building our dashboard extension) and Steve, the remote developer in Australia, also spoke on his progress. It's cool to be a part of the team in this aspect and even be able to add my own input in some cases. From the team's feedback, Patrick was to proceed with using Google Charts for the graphical elements of the dashboard page. I presented some progress I had made on a couple of endpoints in order to gather feedback, which proved fruitful; it took a bit to get my head around it all due to needing to learn the code I was working with.

The next day we had the sprint meeting, which is where we plan out our user stories for the coming two weeks and clean up the last sprint if necessary. We also do this fun thing where we assign 'story points' to our user stories in an attempt to estimate the amount of time they will take. We do this as a team, all voting at the same time. Usually we then go with the median score, but sometimes we will have a discussion and agree on a number after that; this turns out to be quite a great tool when it comes to estimating the time to complete development tasks.

So the first sprint began, yay. We set out to work on our user stories; at this point I had constructed a couple of endpoints, so for the rest of the week I continued to iterate upon them and also got another one up by the end of the week.


This sprint's tasks. Note that this involved a lot of behind-the-scenes work learning/researching how this all works. It also involved my first foray into .NET unit testing, which came with a slight learning curve as well.

Patrick and I worked in tandem in that I set up endpoints in a comfortable order for him to incorporate into his work, saving the trickiest for last.

Before making our own branches in the project's repository we also did some research into git-flow, which is a branching convention for a git workflow. Chris was using this to manage his code, so we were to follow suit in order to keep the branches and commits nice and tidy.

A snippet of code from the first endpoint I created.

Tuesday, October 29, 2019

Week 5: Planning

Project Rundown

We are to create an 'add-on' for an upcoming fieldGo software release. Said add-on will consist of a dashboard that connects to an API serving the desired statistics from the client's fieldGo database.

After studying with Patrick for some time I was well aware that he would prefer the front-end side of development whilst I prefer the back-end, so that works out pretty well! He confirmed my suspicions and we agreed to split the work down the middle for now, keeping in mind that either of us might need to help the other in the future.

The dashboard is to show the following statistics:


The project scope turned out to be surprisingly achievable, which took a weight off our shoulders; we can focus on this MVP before potentially expanding on it in the future.

It was entirely up to us to come up with the design and present it to the team for approval, which left us with a comfortable level of autonomy. We took to the whiteboard to sketch out our design, then the legend Patrick drew it up in a wireframing tool:


It's a basic design as you can see, but it allowed us to present our ideas in a way that would easily invite criticism. Following some feedback from the team we locked the design in and moved ahead with our planning phase. Patrick was planning to research some different plugin options for displaying graphs on our dashboard page, while I moved on to planning the API before developing, as advised by Chris.

To plan the API I drew up an API specification document in which I documented what the API would take as parameters and what data it would return. This allowed me to make quick drafts and then harness Chris's infinite wisdom to plan further ahead. After draft version 3 we felt comfortable moving into development.
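
To give a feel for it, a single entry in the spec boiled down to something like this (the names here are invented for illustration, not taken from the real spec):

```typescript
// What one statistics endpoint takes as parameters...
interface OnTimeRatioRequest {
  dateFrom?: string; // ISO date, optional filter
  dateTo?: string;   // ISO date, optional filter
  jobType?: string;  // optional filter
}

// ...and what it hands back.
interface OnTimeRatioResponse {
  ratio: number | null; // null when no jobs match the filters
}
```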

Our next task before moving into development was planning our user stories; I went on to plan my back-end tasks and Patrick took care of his. Going into next week we felt prepared and ready to go!








Saturday, October 26, 2019

Week 10: Sprint Three

At this week's demo I didn't really have anything to show, as all my work was back-end related and unit tests. If I were to demo, it would just be me scrolling through code, which isn't really the purpose of the session. Fortunately Patrick's side of things had plenty to demo, which by proxy shows that my end of the system also works; he had also made quite a few design corrections which looked nice.

Onto sprint planning the next day: I was seemingly done with the API at this stage, so I was to move onto unit testing for the front-end Angular project. Angular2 comes with Jasmine and Karma built in for unit testing, so that was handy.



Above is a snippet of a unit test I constructed; thankfully I got at least this done before we went on our two-week break. There proved to be quite a learning curve when it came to unit testing the Angular project. I suspect this may have been because I hadn't worked in the front-end previously, and also because I was writing tests for code that I was not familiar with at all. The main pain point proved to be the services that rely on HTTP responses from APIs. In order to test a service in isolation I had to fake the HTTP response so that the test did not rely on any code outside the service. I tried quite hard to get my head around this but didn't manage to before we headed off on the two-week break.
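
For anyone curious what faking the HTTP response involves, here is a minimal sketch using Angular's HttpClientTestingModule; the service and endpoint below are made up, not our actual dashboard code.

```typescript
import { TestBed } from '@angular/core/testing';
import {
  HttpClientTestingModule,
  HttpTestingController,
} from '@angular/common/http/testing';
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';

// A stand-in for the real dashboard service; the endpoint URL is invented.
@Injectable()
class StatsService {
  constructor(private http: HttpClient) {}
  getOnTimeRatio() {
    return this.http.get<{ ratio: number }>('/api/statistics/ontimeratio');
  }
}

describe('StatsService', () => {
  let service: StatsService;
  let httpMock: HttpTestingController;

  beforeEach(() => {
    TestBed.configureTestingModule({
      imports: [HttpClientTestingModule],
      providers: [StatsService],
    });
    service = TestBed.inject(StatsService);
    httpMock = TestBed.inject(HttpTestingController);
  });

  afterEach(() => httpMock.verify());

  it('reads the ratio out of the API response', () => {
    service.getOnTimeRatio().subscribe(res => expect(res.ratio).toBe(0.8));

    // Fake the HTTP response so the test never touches a real API.
    httpMock.expectOne('/api/statistics/ontimeratio').flush({ ratio: 0.8 });
  });
});
```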



We will see if I am to continue with unit testing when I return or if there is something of higher priority.

I am certainly welcoming the break, which will allow me to catch up on the elective class I am taking alongside the project.