3 Filipinos among Asia’s best in boxing

Three Filipinos won in the 2016 Asia's Best in Boxing Awards, Asian Boxing Confederation President Serik Konakbayev of Kazakhstan announced on Wednesday.

Abap's 2014 discovery Criztian Pitt Laurente, 16, of General Santos City, was cited as Best Junior Boxer in Asia for 2016 for his spectacular gold medal win in the Children of Asia Tournament held in Yakutsk, Russia last July. Laurente won the only gold medal for the Philippines in that competition, where Filipinos competed in nine different sports.

Laurente comes from a family of boxers. His father Cristino is one of General Santos' foremost boxing coaches, elder brother Criz Sander is also a national team member, and mother Rosalinda is an Abap national referee-judge.

Long-time Abap head coach Patricio Gaspi was acknowledged as the Best Asian Coach. Gaspi, a former national boxer, is the chairman of the ASBC Coaches Commission, a certified AIBA 3-Star Coach (the highest level), and a certified AIBA Coaching Instructor. He is also a member of the AIBA Coaches Commission.

Maria Karina Picson, who passed the ITO (International Technical Official) Course and Examinations in Antalya, Turkey in 2011 and the AIBA Supervisor's Course and Examinations in 2015 in Almaty, Kazakhstan, was recognized as the 2016 Best Supervisor in Asia.

She is certified for all three AIBA programs: AOB (AIBA Open Boxing, the Olympic program), WSB (World Series of Boxing, a team tournament fought over five rounds), and APB (AIBA Pro Boxing, fought over 12 rounds). Picson is also a member of both the AIBA and ASBC Women's Commissions.

Abap President Ricky Vargas sent a congratulatory message to the three awardees, calling them "Asia's Best, Philippines' Pride." Vargas encouraged them to continue to strive for excellence in their respective roles and to inspire everyone in Abap to do the same.

There were 14 awardees in all, including Daniyar Yeleussinov of Kazakhstan as Best Male Elite Boxer, China's Yu Jin Hua (Best Woman Elite Boxer), Hayato Tsushima of Japan (Best Youth Boxer), Asian Discovery of the Year Shakhram Giyasov of Uzbekistan, Kuok Veng Vong of Macau as Best Asian ITO, and Kazakhstan as Best Asian National Federation for 2016.


Jigsaw: Alternative or Complement to LinkedIn?

Bernard Lunn

I am a regular user of LinkedIn, using it both for biz dev and recruiting. I am a fan of the service, but still a bit of a skeptic on the business model. I decided to look at alternatives, and the one that proved useful was Jigsaw. According to RWW Companies, Jigsaw is "a provider of business information and data services that uniquely leverages user-generated content contributed by its global membership." It claims to have 500,000 members and more than 500 enterprises using the product.

My conclusion: Jigsaw is reasonably useful, despite some flaws. It is a complement to LinkedIn, not an alternative. Jigsaw came in for criticism when it first came along around 2006. But most services are raw when they debut; what matters is a) whether the fundamental concept is sound and b) whether the management team can continuously improve. Jigsaw has top-tier VC financing, so it can go the distance.

I chose to try Jigsaw because LinkedIn did not give me what I needed. I was searching for senior-level contacts at a large company. LinkedIn gave me plenty of Level 3 contacts, but I have learned from experience that Level 3 (somebody I know knows somebody who knows the contact) is not useful and can be a real time sink. So I only bother when I see a Level 2 contact.

I could pay LinkedIn to send an InMail to these contacts, but that is no better than sending a cold email, and why spend money when there are free alternatives?

So my next stop was to Google the names. I saw Jigsaw coming up a few times, so I decided to give it a try. Fairly quickly I was able to get, at no cost, the contact details I needed. That is not as good as an intro, but cold calling/mailing in limited doses can still do the trick.

There were some niggling irritations, as with many relatively new services, but I did get value, and the basic concept seems like it could be viable. Jigsaw works on a "pay or play" principle. You can just pay to get access to the contact information, as with any list. Unlike traditional lists, you can buy just one name, so this works well for selling high-value products to senior people, but not for mass-market spamming. Play means earning points by contributing contact information back into the system. Jigsaw seems to have evolved good systems for managing this to avoid gaming and bad data.

So the data is user-generated, as it is with LinkedIn, as opposed to scraped data from services such as ZoomInfo. Scraped data has value as well: you get the contacts that don't put themselves into LinkedIn. However, data created by other people is not usually as good as data created by the person in question. I noted too many errors in my short stay on Jigsaw. I earned some points by correcting them, but this also made me question the value of the data I had extracted.

I see some value in Jigsaw, if it can keep improving. I don't know how viable Jigsaw is as a business. It strikes me as an inexpensive service to run, so reaching profitability may not be too hard. But I don't know whether this can be a really valuable business in its current form. It is too easy to get email addresses in other ways (Googling the name and just using the corporate email standard), and you can always call via the company receptionist.

What has been your experience with Jigsaw? Have you worked with other alternative services?

Jigsaw company profile provided by TradeVibes


Hadoop Batch File Processing with Hive – Part #2

Carter Shore is an Intel Software Engineer, part of the Intel Distribution for Apache Hadoop Professional Services. He is an industry veteran, having witnessed the births of Unix, C, C++, the PC, Apple, relational databases, the Internet, cellphones, Java, social media, Hadoop, and many, many, many iterations of what is considered to be 'Big Data'.

In part 1 of this article, we discussed the Hadoop 'Batch File Processing' pattern, and how Hive can be used to craft a solution that provides read consistency and decoupling from operational file processes.

To review, most large-scale environments will require that solutions address requirements and issues like these:

Data Latency – What is the allowable delay between when the files arrive and when the data within them must become 'visible' or available for consumption.
Data Volume – What is the aggregate size of previously landed data, and what is the estimated volume of new files per process period.
Landing Pattern – How often do the new files arrive, is it a push or a pull, is it synchronous or asynchronous.
Data Quality – Business and operational rules that describe structure, content, and format of the data, and disposition of non-conforming data.
Accountability – How to track and account for the handling and disposition of every record in every file.
Data Retention – How long must previously landed data remain 'visible' or available.
Read Consistency – What is the access or consumption profile, and what is the effect of either 'missing' or 'extra' records when data is queried or exported.
Efficiency and Performance – Does the size or volume of the data, or the amount of consumption activity, dictate performance levels that may require special design or methods to achieve.

In part 1 we provided a simple example and a Hive solution that addressed some of the issues above. In part 2 we will discuss how the Hadoop 'Small Files Problem' relates to Batch File Processing, and describe some features of Hive that can be used as a solution. We will present a more complex Batch File example that has Small File issues, along with a Hive-based solution.

Small Files = Big Trouble

Current Hadoop distributions suggest that 10 million files is a practical threshold of concern. This is based on the size of the HDFS filesystem image that the namenode stores in memory, and the size of the entry needed for each file and each block of that file. If we receive a large number of files each day (100,000, for example), it takes only 100 days to reach that threshold.

An easy solution would be to simply require that the data source aggregate the small files into larger files before sending them. But recall that in many cases, the enterprise deploying the consuming Hadoop solution is not the owner of the source data, and may have little influence on its file characteristics, record format, encoding, delivery method, or scheduling. We may also be dealing with a large number of individual sources, or with transaction or event records, where latency requirements prevent accumulating the records for periodic aggregation.

In addition to the sheer data volume per process interval, it's also important to consider the number of files and the average file size. This matters because of the way HDFS is designed to support tens of millions of files. First, files are partitioned into equal-sized blocks, by default 64 MB. If a file is smaller than the blocksize, it will obviously fit entirely into only one block. Second, each file, no matter the size, requires an entry in the HDFS metastore.
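For a rough sense of scale, consider a back-of-the-envelope estimate using the commonly quoted rule of thumb of roughly 150 bytes of namenode heap per namespace object (the exact figure varies by Hadoop version):

10,000,000 files × (1 file object + 1 block object) × 150 bytes ≈ 3 GB of namenode heap

Ten million single-block files thus claim gigabytes of namenode memory before a single record is read, which is why file count, and not just data volume, must be managed.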
So adding millions of small files each day, week, or month will cause the metastore size to grow, perhaps eventually exceeding allocated memory space. Even if it fits, HDFS internal performance may be impacted. Some applications have seen performance gains of hundreds, even thousands, of percent for the same volume by applying aggregation. This is generally termed 'The Hadoop Small Files Problem'.

It's possible to specify the blocksize that will be used for a file at create time, but the limitations above still apply. Most Hadoop clusters specify a minimum blocksize, nominally 1 MB, to avoid creating a swarm of small files.

Average File Size = Interval Data Volume / Interval File Count

If the average file size is less than 50% of the default blocksize, then consider a file aggregation strategy. A simple aggregation method accumulates the files in a local filesystem landing area, and then periodically concatenates them into larger HDFS target file(s):

cat <landing_dir>/* | hadoop fs -put - <hdfs_target_file>

The aggregation trigger might be the total landing filesize exceeding a threshold, or the end of a defined process interval, or some combination (a concrete sketch of such a trigger appears below).

Requirements for data latency and availability may prevent using this simple strategy. A more complex alternative would place the files directly into HDFS as they land. Then, at some later point, those smaller files would be aggregated into a large HDFS file, and the smaller HDFS files removed. This solution can result in temporary data inconsistency, since there will be multiple copies of the same data during the time required to aggregate to a large file and to delete the smaller files. If the process schedule and access requirements allow a maintenance window where read operations can be suspended, then this data inconsistency presents no issues. Otherwise, we must seek another solution.

Hive's support of data partitioning offers a simple and effective solution to read consistency, and it also enables us to meet data latency requirements while addressing the 'Small Files Problem'.

With partitioning, we define the Hive tables as usual, except that one or more elements that would normally be defined as columns are instead defined as partition keys. We place the files containing data into folders with defined locations and names that correspond to the partition keys. The files in the partition folders will not become 'visible' as part of the table until we execute a Hive statement that explicitly adds the partition to the table. In Hive, table definitions are pure metadata; they are persisted into a metastore database and have no effect on the actual underlying HDFS files.

Hive itself is not a database, and it does not support transactions for DML statements, i.e. there is no commit/rollback. BUT the metastore IS typically hosted in a database that does support transactional commit/rollback (MySQL, Oracle, etc.), so DDL actions like CREATE/DROP/ALTER are atomic.

In particular, a partition is added to a table with defined partition keys and values. The LOCATION clause associates that partition with the physical location where the files containing the records may be found. Once the partition is created, we can execute an ALTER statement to redefine that location, as a DDL transaction. This gives us a solution to the small-file issue: we can create multiple distinct target HDFS folders that hold different 'versions' of our recordset, and 'swap' between them by executing an ALTER statement.
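Before turning to the partition-swap design in detail, here is the trigger sketch promised above: a minimal shell script for the accumulate-and-concatenate strategy. It is illustrative only; the landing path, target path, and 1 GB threshold are assumptions, not values from a real deployment, and production code would need locking and error handling:

#!/bin/bash
# Sketch of a size-threshold aggregation trigger (paths and threshold assumed).
LANDING=/data/landing                              # local landing area (assumed)
TARGET=/all_logs/batch_$(date +%Y%m%d.%H%M%S)      # HDFS target file (assumed)
THRESHOLD=$((1024 * 1024 * 1024))                  # concatenate once 1 GB has landed

total=$(du -sb "$LANDING" | cut -f1)               # bytes currently accumulated
if [ "$total" -ge "$THRESHOLD" ]; then
    # Snapshot the file list first, so files that land mid-run are not lost.
    mapfile -t FILES < <(find "$LANDING" -type f)
    # Stream the landed files into one large HDFS file, then clear the area.
    cat "${FILES[@]}" | hadoop fs -put - "$TARGET" && rm -f "${FILES[@]}"
fi

The same script can be driven from cron at the process interval, so that either the size threshold or the end of the interval triggers the concatenation.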
Returning to the partition-swap approach: while a static version of the recordset is visible to the consuming processes, we can perform intake and consolidation of new files in another working version, without affecting read consistency. At the latency interval, we halt intake and file processing, and swap locations. The most recently processed records are now visible in Hive, and we can resume intake and file processing in the newly swapped-out folder.

A complex example

Logfiles from several sources land throughout the entire day. Logfile names contain a create timestamp 'YYYYMMDD.hhmmss.nnnnnn'. The logfiles may contain a variable number of records, from 1 up to millions, but the average record count for a logfile is 1,000, and the individual log records are only a couple of hundred bytes. Average daily volume is 10 million records, but it can peak at 50 million records per day. Overall volume is projected to increase by 20% per year.

Our requirement is to process the log records and make them available in Hive within 5 minutes of landing. The retention period for the records is one year.

In the average case, we will get 10,000 files per day of 1,000 records each; in the worst case, 50 million files of one record each. So we must implement a scheme that consolidates the smaller files to avoid overwhelming the namenode metastore, while still meeting our 5-minute latency requirement and maintaining read consistency.

Solution:

A Hive table, 'all_logs', contains the records that will be consumed:

CREATE EXTERNAL TABLE all_logs (Column_1 ... Column_N)
PARTITIONED BY (log_timestamp string);

We choose the time granularity of the partition as one day, which yields on average 10 million records using 2 GB of storage, and in the worst case 50 million records using 10 GB.

An enterprise scheduler spawns a handler script every minute. The handler script examines the landing area for new logfiles and checks to make sure that they are stable, i.e. fully landed. Each new stable file is streamed through data quality and formatting filters as it is copied to an HDFS partition folder.

We employ at least two distinctly named HDFS working partition folders. One folder contains all records that have arrived so far that day, and is associated with the current day's table partition. The other serves to accumulate the new logfiles as they are copied from the landing area. At 5-minute intervals (dictated by our latency requirement), the folder roles are swapped by altering the partition location settings.

For example, assume that the process date is 2013/08/29. We create two working folders, 'working_0' and 'working_1', before the midnight transition:

ALTER TABLE all_logs ADD PARTITION (log_timestamp = '20130829') LOCATION '/all_logs/working_0';

After 5 minutes of accumulating the new logfiles into 'working_1' we swap:

ALTER TABLE all_logs PARTITION (log_timestamp = '20130829') SET LOCATION 'hdfs://<namenode>:<port>/all_logs/working_1';

Note that we used a full URI as part of the SET LOCATION clause rather than just a path, because some Hadoop distributions will not accept a simple HDFS path when altering a partition location.

Now we can consolidate the existing files in folder 'working_0' into one or more larger files, and also start landing the new files into it. After 5 minutes, we swap again. The swapping and accumulation continue throughout the process day. At day rollover, the final HDFS folder for the partition is created as '20130829', all the files for the day are consolidated into it, and the final partition location is set to it.
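A minimal sketch of the 5-minute swap handler might look like the following. The state-file location, folder naming, and 'log_' file prefix are assumptions for illustration, the namenode host and port placeholders must match your cluster, and guards for empty folders and failed commands are omitted:

#!/bin/bash
# Sketch of the 5-minute partition-swap handler (names and paths assumed).
DAY=$(date +%Y%m%d)
STATE=/var/run/all_logs.active             # remembers which working folder is live
ACTIVE=$(cat "$STATE" 2>/dev/null || echo 0)
NEXT=$(( (ACTIVE + 1) % 2 ))

# Make the folder that has been accumulating new files the visible partition.
hive -e "ALTER TABLE all_logs PARTITION (log_timestamp = '${DAY}')
         SET LOCATION 'hdfs://<namenode>:<port>/all_logs/working_${NEXT}';"
echo "$NEXT" > "$STATE"

# The swapped-out folder is no longer read by consumers, so its small files can
# be consolidated into one larger file; intake resumes there only afterwards,
# per the design above, so no files land mid-consolidation.
SWAPPED=/all_logs/working_${ACTIVE}
hadoop fs -cat "${SWAPPED}/log_*" | hadoop fs -put - "${SWAPPED}/.consolidated.tmp"
hadoop fs -rm "${SWAPPED}/log_*"
hadoop fs -mv "${SWAPPED}/.consolidated.tmp" "${SWAPPED}/log_$(date +%H%M%S).agg"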
Working folders are truncated, redundant files are deleted, and a new daily process cycle begins. The net result is a far smaller number of consolidated files in each daily partition, while meeting the requirements for 5-minute latency and read consistency.

Conclusion

We have described the 'Batch File Processing' use pattern and shown how it relates to the 'Small Files Problem'. We discussed some of the important requirements and issues, and provided examples of solutions that provide read consistency, decoupling from operational issues such as archiving and file-count management, and satisfaction of latency requirements, using standard Hive features.

Carter

Other solutions

There are alternative 'Small Files' solutions using both standard and specialized components:

HAR Files – Hadoop Archive (HAR) files were introduced in 0.18.0. They can reduce the number of file objects that the namenode metastore must deal with by packing many smaller files into a few larger HAR files. HAR files do not currently support compression, and no particular performance gain has been reported.

Sequence Files – Code is written to process multiple small source files into a target SequenceFile, using the filename as the key and the file content as the value. That target is then used as the source for subsequent queries and processing. SequenceFiles are splittable and also support compression, particularly at block level. SequenceFile creation and reading can be somewhat slower, so performance must be factored in.

HBase – Stores data in MapFiles (indexed SequenceFiles). This can be a good choice when the predominant consumption profile is MapReduce-style streaming analysis with only occasional random lookup. Latency for random queries can be an issue.

Filecrush – The Hadoop file crush tool can be run as a MapReduce job or a standalone program. It navigates an entire file tree (or just a single folder) and decides which files are below a threshold, combining those into bigger files. It works with sequence or text files, and with any type of SequenceFile regardless of key or value type. It is highly configurable, and a downloadable jarfile is available.

Consolidator – A Java Hadoop file consolidation tool written by Nathan Marz, found in the 'dfs-datastores' library. It can be integrated into custom code or implemented as an add-on component. There is little explicit documentation besides the code itself.

S3DistCp – If you are running in the Amazon world, especially EMR, this tool can solve a lot of issues. Apache DistCp is an open-source tool to copy large amounts of data in a distributed manner, sharing the copy, error handling, recovery, and reporting tasks across several servers. S3DistCp is an extension of DistCp that is optimized to work with Amazon Web Services (AWS), particularly Amazon Simple Storage Service (Amazon S3). You use S3DistCp by adding it as a step in a cluster. Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS, where it can be processed by subsequent steps in your Amazon Elastic MapReduce (Amazon EMR) cluster. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3. Use the argument '--groupBy,PATTERN' to cause S3DistCp to concatenate input files whose names match the regex 'PATTERN' into a single target. Additional arguments such as '--targetSize,SIZE' and '--outputCodec,CODEC' enable fine-grained control of the results.
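To make the S3DistCp option concrete, here is a sketch of an invocation that consolidates one day's small logfiles into roughly 128 MB gzip-compressed HDFS files. The jar path is typical of EMR AMIs of this era but varies by version, and the bucket, paths, and regex are made-up examples:

hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
  --src s3://my-log-bucket/raw/20130829/ \
  --dest hdfs:///all_logs/20130829/ \
  --groupBy '.*(20130829).*\.log' \
  --targetSize 128 \
  --outputCodec gzip

Every input file whose name yields the same value for the capturing group is concatenated into the same output, so the day's thousands of small logfiles collapse into a handful of large, compressed files.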


Leadership Starts With One

Great leaders have inspired millions of people throughout history. Likewise, today's great business leaders at all levels motivate employees to transform their enterprises and help them reach new heights of accomplishment. They instill confidence that enables their followers to achieve what others might consider impossible.

But it's easy to forget, or fail to note at all, that these leaders have one other thing in common: they all had to lead themselves before leading others.

Leading oneself to inspire one's own heart and discipline one's own ego is the first step any great leader takes before embarking on a great leadership role. The backgrounds of all great leaders reveal struggles that molded their character, helping them conquer fears and doubts, and making them more passionate and resilient.

Leadership Drive

The drive to achieve greater things starts with a fire that we light within ourselves. It starts in our core and becomes a reflection of our values. And it can ignite other fires.

"Learn to lead yourself before leading others."

Think about it for a moment: How can we inspire others if we don't inspire ourselves? How can we drive others to greatness if we can't seek it within ourselves? How can we expect more from others who are willing to follow us if we don't expect more from ourselves?

We often forget that before we lead by example, we must exemplify to ourselves what we expect from others. These individual challenges are the most testing of all, because we often have to go through them alone, and each learning experience is a battle.

Leadership Mindset

In business, I hear people say that leadership is only for managers who have direct reports. This enduring attitude assumes that one can't lead without a hierarchical relationship. However, we should treat every opportunity as something worthy of leadership, regardless of our role, responsibility, or position.

"You don't need to manage people in order to lead."

Why think of leadership as being like retirement, something we wait for when we are young and without rank? As in formal education and the pursuit of lifelong learning, it is never too early and never too late to lead.

Leadership Presence

Eventually, when you are trusted with a position of authority, the authentic test of leadership will be at the forefront. When hundreds or thousands look to you for direction and guidance, there will be no place to hide, no room for doubt, and no time for experimenting.

Professional golfers treat every practice putt like the one they need to win the tournament. Likewise, we must practice leadership at all times, and as early and as often as possible.

"Leadership of the one is more about leadership presence."

It is an attitude we practice in every role we are engaged in, in every business decision that comes in front of us, whether we are the ones making the decision or not, a luxury we will not have when we are in the driver's seat.

I believe strongly that leading the one is as important as leading many, because all our journeys start there.

Connect with me on Twitter @KaanTurnali, LinkedIn, and here on the IT Peer Network.

This story originally appeared on turnali.com.


NASC, PDS Form New MPP and Mini-Bulker Pool

Sweden-based NovaAlgoma Short Sea Carriers (NASC) and Germany's Peter Döhle Schiffahrts-KG (PDS) have decided to collaborate in the multi-purpose project vessel (MPP) and 13,500 to 15,000 mini-grabber dry-bulk markets.

On September 26, the companies unveiled DNA Shipping, a commercial agreement to pursue consolidation and growth within the two markets.

The MPP vessels will be managed by PDS's existing commercial management office in Hamburg, Germany, while the bulkers will be managed from the NASC commercial office in Lugano, Switzerland. The joint venture partners' other offices in Rotterdam, Miami, Houston, and Dubai are also expected to produce cargoes for the pool.

While the business will be managed as two separate commercial fleets, the partners expect to exploit cargo synergies that exist across the MPP and bulker segments and to benefit from the shared best practices of the two companies. The joint venture will also reach out to owners of vessels in the MPP and mini-bulker markets who are in need of commercial management, with the objective of expanding the fleet.

The parties expect the joint venture to begin operation in October. It will comprise 26 vessels, including 13 MPP vessels and 13 mini-bulkers. NASC will contribute 12 vessels to the pools and PDS will contribute 14.

"We believe the creation of this new agreement is an exciting first step in bringing consolidation to the fragmented MPP and mini-bulker markets," said Ken Bloch Soerensen, Executive Chairman of NASC.

The new entity will result in the creation of the largest mini-grabber pool in the world, according to NASC.


Carbon tax debates about to heat up with two major elections in 2019

CALGARY (660 NEWS) – With two major elections coming up this year, you will hear a lot about a carbon tax. Some groups say there are more effective ways to deal with climate change.

"We have to start looking at other ways to combat climate change that isn't going to harm our world-class industries like the oil and gas industries in Canada," said Mark Scholz, president of the Canadian Association of Oilwell Drilling Contractors.

He argues there is a strong system in place in Canada to fight climate change. "We have to get behind our world-class regulatory systems."

He also takes issue with some rhetoric being thrown around. "I think it was very irresponsible of the federal government to indicate that our world-class regulatory systems somehow didn't have the confidence of Canadians, and we would argue they did."

Instead, Scholz would like to see something like a return to a levy on some of the heavier emitters, like what we saw from 2007 until the carbon tax replaced the system. The money raised would then go to an innovation fund. "That tech fund drove some of the innovation of getting carbon out of the barrel to the tune of 30 to 40 per cent in, quite frankly, a very short period."

He argues that would mean our trade-exposed industries would be better protected.

Now, if Jason Kenney and the UCP win the provincial election, he has promised to fight the federal government on the carbon tax issue, but we will also have to wait and see what happens in the federal election expected in the fall. He would be joined by other premiers around the country, including Doug Ford in Ontario and Scott Moe in Saskatchewan. The court battle between those provinces and the federal government is ongoing.

Andrew Scheer and the Conservative Party have also been staunchly against the national carbon tax, but if Justin Trudeau's Liberals win a majority or even a minority government, the policy is unlikely to change.


Star Sports Greyhound Derby THURSDAY BLOG

SIMON NOTT: First Round heats (THURSDAY)

The evening started with a real blow: the news that Sky wouldn't be showing the Greyhound Derby Final. Still, bad news doesn't stop the show, so it was heads down and get stuck into the real business of taking bets.

DR SIMON: Fergal O'Brien's social media maestro is a big greyhound fan, so naturally we got his thoughts ahead of tonight's first round.

HEAT 1
Foxwood Boom winning at 25/1 was an excellent start.

HEAT 2
The second heat saw 2/5 shot Droopys Expert not only beaten but eliminated, taking a single grand bet with us with it. The winner Droopys Giorgio was returned 7/2, but Ben laid a punter a grand at 6/4 without the favourite, so sort of did it right, while the backer would have been rueing his lack of pluck. He still got the better of the deal though.

HEATS 3 & 4
Lenson Blinder winning at 3/1 in the third heat was OK, but the fourth heat drew a clenched-teeth smile from Ben as jolly Borna Gin got up on the line and landed a couple of grand bets from 'shrewdies', though a similar lump on Clash eased the blow, just a little.

HEAT 5
Borna Account won heat five, but in a race of modest bets. At 7/2 he put paid to joint jollies Forest West and Calico Ranger, so small mercies were well received.

HEAT 6
Whoops Jack was warm, opening at even money for heat six, but eased to 11/10 as the dogs entered the traps, which prompted a late flurry of money. The jolly was out and gone; Greenwell Jean finished very well but was never going to get there in time to save the book, which included a £700 bet at 11/10.

HEAT 7
Calico Brandy from trap 1 was another jolly to oblige, this time in heat 7 at 6/4, but only after a photo prompted by the fast-finishing Trade Fudge, who nearly spoilt the party for the winner's vocal fan club in front of the books.

HEAT 8
Heat 8 was the race of the night; a decent-sized blanket could have covered six dogs at the line. The jolly and winner Rising Brandy fought off all comers to land some decent bets at 5/4 and induce the biggest roar of the evening, from the massed ranks of the Dartnell contingent.

HEAT 9
With one heat to go we were in prematurely relaxed mode with prices up, still talking about the blinding previous race, when a punter woke Ben up with a grand on Crossfield Will at 7/4. Then another waded in with a rouf (£400) at 6/4 just before the off. The final roar for winner trap four Crossfield Will was loud enough to freeze the blood of any bookie. An evening where circumstances and results went against us.

When the going gets tough....

MICK LIVESEY: Towcester's Head of Greyhound Racing Mick Livesey talks about his history in greyhound racing, writing for the 'Sport' newspaper, death threats in that job, his pride in Towcester's dog track, the preparation that goes into the Greyhound Derby, and his idea of the winner.

Simon Nott is author of:
Skint Mob! Tales from the Betting Ring
CLICK HERE FOR MORE DETAILS
