Traffic Advisory: Colchester County

COLCHESTER COUNTY: Sections of Plains Road, from the intersection of Masstown Road west to Highway 4, will be reduced to one lane for repairs to the road and the bridge over the Debert River. Temporary traffic signals will direct traffic over the bridge. Motorists are asked to use caution through the construction zone. The project is scheduled to be finished Friday, Aug. 8.

Manitoba politician who made inappropriate remarks says he’s learned his lesson

WINNIPEG — A Manitoba politician who was investigated for showing an assistant a picture of naked women and making inappropriate remarks says he has learned his lesson.

Rick Wowchuk, a Progressive Conservative member of the legislature, admits he violated the legislature’s respectful workplace policy on five occasions, but would not discuss details.

CBC News reported that Wowchuk’s former constituency assistant, who was not named, said the politician made a joke leading her to believe there were animal photos on his cellphone before he showed her a picture of naked women holding chainsaws. The woman also said Wowchuk called while he was in the bathtub and made comments about her wearing a bikini.

Wowchuk was allowed to remain in the Tory caucus and was re-elected in his Swan River constituency in the Sept. 10 provincial election.

Wowchuk says he has apologized and has undergone sensitivity training, which he says has made him a better person.

The Canadian Press

Major Canadian brewer not amoosed by Vermont pub waging trademark battle

Canada’s largest independent brewery is once again locking horns with a smaller competitor in a trademark dispute over its moose-themed names and logos.

Moosehead Breweries has filed a trademark infringement lawsuit against the Hop’n Moose Brewing Co. in Rutland, Vt., arguing that the brewpub’s similarity in name and logos could create confusion and damage the Moosehead brand.

“As a family-owned company, Moosehead Breweries respects and supports the efforts of small business owners everywhere,” Moosehead said in an emailed statement. “It is, however, incumbent on us to protect our business interests, including our corporate trademarks.”

In court documents, Moosehead said it owns multiple U.S. trademark registrations for the words “moose” and “moosehead” as well as images of the head and antlers of a moose. The New Brunswick-based brewery said its moose family of trademarks has been in use in the United States since at least the late 1970s in connection with its beer brands and consumer products, which include a variety of drinking glasses, apparel, posters, stickers, playing cards and pens.

Hop’n Moose opened in 2014 and recently began canning its beer, which is sold in about 15 nearby stores. In court documents, Moosehead said the Vermont brewpub has also been using its trademarks on promotional materials and related consumer products since the business first opened its doors.

Moosehead said in the lawsuit that it had made repeated demands to Hop’n Moose owner Dale Patterson to cease using its trademarks, to no avail.

“When faced with issues of trademark violation, litigation is always a last resort,” Moosehead said. “We always — as we did in this case — attempt to work out resolutions with any breweries considered to be violating Moosehead trademarks.” The brewer added, “we remain open to further discussions.”

Patterson said Tuesday he hasn’t seen Moosehead’s lawsuit, but doesn’t want to change his logo.

This isn’t the first time a small brewer has attracted the ire of Moosehead. Last year, Moosehead informed Regina’s District Brewery that it opposed the name of District’s flagship beer, Mues Knuckle (pronounced Moose Knuckle), again arguing the name is confusingly similar to its own and infringes on its trademark.

District Brewery president Jay Cooke said at the time there is no photo of a moose on its product and the name, which has a German spelling, distinguishes it from Moosehead.

In 2002, Moosehead won its U.S. trademark dispute with an Idaho brewery looking to add the name “moose” to one of its products. After a five-year battle, Grand Teton Brewing Co. discontinued production of its Moose Juice Stout.

KMC mulls sprinklers for entire Eliot Park

Kolkata: The Kolkata Municipal Corporation (KMC) has installed some sprinklers for watering the plants at Eliot Park in the Maidan area and plans to extend similar facilities to the entire park, which spans around 1.2 km. The entire area will require around 40 such sprinklers. KMC has carried out a pilot project with a single sprinkler at Eliot Park, which has yielded good results.

Southern Avenue in South Kolkata is the only other place in the city where the civic body is using sprinklers for watering plants. Around 14 boulevards in Southern Avenue have sprinklers, and each of them is capable of watering seven to eight trees on average. The installation of such facilities entails an investment of Rs 40,000 per sprinkler.

The sprinklers are used twice a day for watering the plants. They are controlled in such a manner that the entire process of watering is carried out scientifically: a person has to simply press a button to switch them on and off. Apart from this, the civic body has 12 vehicles that water plants in the city on a daily basis.

The gardeners working under the Parks and Gardens department of KMC carry out trimming of the trees from time to time, to ensure that the weight of the trees is adequate for them to remain rooted to the ground. “Trimming is necessary to ensure that the trees do not become too heavy and become susceptible to uprooting,” a senior official of the department said. The civic body uses hydraulic ladders for trimming big trees. For Boroughs 1 to 10 of KMC, there is one hydraulic ladder each for carrying out trimming, whereas in Boroughs 10 to 16, KMC uses two such ladders for the purpose. “We have already initiated the process of procuring some more ladders for trimming,” the official added.

It may be mentioned that soon after taking over as the Mayor of Kolkata a few months ago, Firhad Hakim formed a separate department titled Urban Forestry and has laid a lot of emphasis on preserving and augmenting the green cover in the city. An expert committee has already been formed with Kumar as chairman, which will soon come out with a comprehensive plan on how to move forward with urban forestry in the city. Apart from botanists of KMC, the committee has environment experts from Calcutta University and Jadavpur University as well.

UN agencies step up aid as heavy rains continue to ravage Bolivia

2 February 2007

United Nations agencies are stepping up assistance to Bolivia, where El Niño-induced heavy rains have caused over a dozen deaths and continue to displace civilians.

Almost 19,000 families – or 94,000 people – have been affected by the floods and subsequent landslides, said the UN Office for the Coordination of Humanitarian Affairs (OCHA), citing statistics from the Bolivian Government, which declared a state of emergency last month. This is more than double the number of affected families reported two weeks ago. Some 17 people have been killed and three are reported missing.

The UN World Food Programme (WFP) has authorized the distribution of 61,520 rations to feed over 3,000 families in five affected departments, while a UN Emergency Technical Team is providing assistance to Government agencies and stands poised to respond to further needs should they arise.

UN human rights chief welcomes Rwanda’s abolition of death penalty

11 December 2007

The United Nations High Commissioner for Human Rights today lauded the abolition of the death penalty in Rwanda. Along with Gabon, which also recently decided to ban the practice, Rwanda joins “the vast majority of UN Member States that have already done so,” Louise Arbour told the Human Rights Council, currently in its sixth session in Geneva.

“Meanwhile, it is important to reiterate that where the death penalty still exists, its use should conform to restrictive international standards,” she added.

The High Commissioner also welcomed the broad support for a General Assembly initiative calling for a global moratorium on executions with a view to abolishing them entirely. Last month, the Assembly’s third committee, which deals with human rights issues, voted 99 to 52, with 33 abstentions, in favour of the resolution, which states “that there is no conclusive evidence of the death penalty’s deterrent value and that any miscarriage or failure of justice in the death penalty’s implementation is irreversible and irreparable.” That resolution will now go before the full 192-member Assembly for a vote this month. All Assembly resolutions are non-binding.

In her address to the 47-member Council today, Ms. Arbour also spoke about her latest visit to Sri Lanka, where she focused on the issue of abductions and disappearances, which have been reported in alarming numbers over the past two years. She also mentioned her first trip to Afghanistan in two years, and voiced concern at the country’s limited progress on women’s rights.

On Sudan, the High Commissioner drew attention to the serious and ongoing violations of international human rights and humanitarian law, especially in the war-wracked Darfur region. “More needs to be done urgently by the Government and the international community to extend adequate protection to civilians,” she said.

Three dead and over 25 injured in twin accidents

Three people were killed and over 25 others sustained injuries in twin accidents today.

In one accident, a bus collided with a van in Ampara this morning, killing three people and injuring 10 others. In the other, a bus skidded off the road in Dambulla, injuring 15 passengers. The bus was operating from Polonnaruwa to Kandy when the accident took place. (Colombo Gazette)

India, Sri Lanka discuss military training, bilateral cooperation

Captain Ashok Rao, Defence Advisor to the High Commission of India, paid a courtesy call on the Commander of the Army, Lieutenant General Mahesh Senanayake, at Army Headquarters today.

The cordial meeting at the Commander’s office focused on matters of bilateral importance and cooperation, including mutual training programmes and capacity-building modules. They also recalled the sound and historic relations that exist between both organizations and underlined the need to foster such ties further.

Lieutenant Colonel G.S. Klair, Assistant Defence Adviser of the High Commission of India, and Colonel Udaya Kumara, Military Assistant (MA) to the Army Commander, also attended the meeting. Towards the end of the meeting, the Commander of the Army and the visiting Defence Advisor exchanged mementos as symbols of goodwill and understanding. (Colombo Gazette)

Football: Ohio State lacks frontrunner at quarterback with Spring Game on horizon

Then-redshirt sophomore quarterback Joe Burrow (10) runs the ball in the fourth quarter against Nebraska in Memorial Stadium on Oct. 14. Ohio State won 56-0. Credit: Jack Westerheide | Photo Editor

Dwayne Haskins and Joe Burrow have been through this before — just last year, in fact.

Last March, the quarterbacks went back and forth, battling each practice to become J.T. Barrett’s backup. Neither won. They both played well in the Spring Game and entered the summer tied in the race to be second in line for playing time.

But a broken hand near the end of fall camp sank Burrow’s chances of becoming Barrett’s backup and handed Haskins the role. Even when Burrow returned to action, he entered after Haskins late in blowout games. Later in the year, Haskins took advantage of the opportunity, completing a comeback win over the Wolverines in Ann Arbor, Michigan, after being thrust into action in the second half when Barrett suffered an injury.

Despite this season’s quarterback battle having immeasurably higher stakes than last year’s, Haskins and Burrow seem to be stuck in the same place — a tie. But this time, they are tied atop the depth chart. And this year, a third quarterback — redshirt freshman Tate Martell — has engaged them in competition.

Head coach Urban Meyer said he is taking the quarterback competition “day-by-day.” He said Martell had a better practice Monday, Burrow had a better performance in an intrasquad scrimmage Saturday, then Haskins “came back” and played better.

Despite the offense not having a set leader at quarterback, Meyer seemed unconcerned with the ongoing uncertainty behind center. “You’d wish one would take it,” Meyer said. “But then again, you like having the day-to-day competition, which is what I’m seeing.”

In order for someone to separate himself and earn the starting job, Meyer said Haskins, Burrow or Martell have one simple job: to “lead the team.” Thus far, he feels that has not happened. “There’s got to be a separation at some point, and right now there is not that separation,” Meyer said. “Just when one starts going, the other one comes up, and the other one drops a little bit.”

As each practice passes and no quarterback separates himself from the pack, the Buckeyes get one day nearer to ending spring practice without naming a starter. Ohio State has just five more practices in the next two weeks before playing the Spring Game on April 14 at Ohio Stadium. If Meyer does not name a starter, it would be the first time since 2015 that Ohio State enters a spring game without a quarterback cemented atop the depth chart. Meyer said he does not know whether he will have a starter by the end of spring.

However, the timeline has major implications for the position at Ohio State. Both Haskins and Burrow have shown, in limited time, they possess the requisite skills to start at a major college football program. But Burrow could transfer if Haskins, the assumed favorite, earns the nod.

Burrow is on track to graduate from the university in May, meaning that, unlike most college athletes, he would not have to sit out a season if he decided to transfer. Instead of backing up Haskins or Martell, he could transfer and compete for a starting job on a team without Ohio State’s high-end talent at quarterback. A month ago, Meyer said his first obligation is to Ohio State, but “probably, yes,” he has an obligation to tell Burrow his status by the end of spring practice. Given the lack of separation between the three quarterbacks, the probability of Meyer not offering Burrow a solid answer by the time he graduates seems higher than ever.

The last time Meyer did not make a decision at quarterback after spring practice, Ohio State’s disastrous circus of Cardale Jones and J.T. Barrett flip-flopping starts and snaps led to an offense that never seemed to find a rhythm in 2015. Meyer said he has moved on from that situation and did not learn anything from it.

But the clock is ticking, and the day-to-day competition that Meyer enjoys will end after the Spring Game in less than two weeks. Without a named starter, Ohio State will not have a leader behind center during the summer with whom the offense can build a rapport. That might not worry Meyer yet, but the uncertainty could do irreparable damage if he does not make a decision until the fall.

EnergyLogic highlights waste oil and temperature regulation solutions for mines

EnergyLogic supplies waste oil heaters and boilers, which it says offer “clean, safe used oil disposal” for waste oil from earthmoving equipment. It also supplies high volume, low speed fans that can provide additional energy savings and help regulate temperatures in mine maintenance facilities.

The company will be showcasing these solutions at Booth #21194 at MINExpo 2012 and states: “Setting up self-sufficient, self-contained mining camps is a massive task that brings unique challenges for energy management, such as used oil disposal and maintaining a comfortable, productive environment. Waste oil heating systems and high volume, low speed fans can solve several important operational challenges while saving money and improving comfort.”

Waste oil heaters and boilers provide EPA-approved, safe onsite recycling of the used oil produced by earthmoving equipment and service vehicles. Using this byproduct as an energy source means no other energy source, such as propane or diesel, needs to be brought in. Recycling the used oil onsite also relieves the disposal burden.

“While many mining operations are already utilising used oil as an energy source, we are excited to expose these concepts to a broader mining population,” said Robert Stevens, President and CEO of EnergyLogic. “This on-hand energy source can provide potable hot water that can be used for showers, ice melt, radiant heat, cleaning and processing minerals or as a supplement to HVAC. And it minimises the impact on the environment.”

EnergyLogic states that its systems are the only waste oil furnaces on the market that incorporate intelligence into the system. New in September 2012, EnergyLogic systems can include a SmartStat (patent pending), a programmable thermostat that includes system diagnostics. This device monitors system conditions, such as low fuel and vacuum pressure, and triggers actions by the furnace to prevent system failures.

High volume, low speed (HVLS) fans can help keep maintenance facilities cool in the summer and warm in the winter. MacroAir, the originator of HVLS technology, produces a variety of HVLS fans to meet the industrial needs of a mining operation. These fans circulate air for a gentle, natural cooling effect in warmer months. In cooler months, they destratify hot air and push the warm air down so it can be felt and appreciated. Their gentle air circulation pushes air beneath large objects, such as earthmoving equipment, quickly drying floors.

As an option, customers can operate up to 30 fans from a single control pad. The fully integrated LCD touchscreen panel lets users control the speed and change the direction of up to 30 MacroAir fans from a single, convenient location; fans can be operated individually or synchronised to operate as a unit. The control panel also includes real-time feedback on energy use.

Fewer than 5 per cent of the world’s languages exist online

Fewer than five per cent of the world’s more than 7,000 spoken languages can be found online, and more than half could be under threat.

The study, conducted by András Kornai and titled Digital Language Death, measured how many of the world’s languages are used on the web. To do this, he designed a program that crawled top-level web domains and tracked the number of words used in each language. It also analysed Wikipedia pages, a key indicator of how widely a language is used online, as well as language tools like spell checkers and operating systems.

The result was that, at best, five per cent of the world’s languages exist online, and the study estimated that 2,500 will survive for another century, at least in spoken form. At least half of those languages are currently under threat and could disappear by the end of the century.

The study identifies three main signs that a language is under threat. The first is loss of function, where the language is used less for day-to-day tasks like commerce; the second is loss of prestige, where a language loses importance among younger generations; and the third is loss of competence, where speakers adopt a simplified version of the language.
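The crawl-and-count approach described above can be sketched in a few lines. This is a minimal illustration of the idea, not Kornai’s actual program: the toy stopword-based `guess_language` function and the hard-coded sample pages are stand-ins for a real language identifier and a real web crawl.

```python
from collections import Counter

# Toy stopword lists standing in for a real language-identification model.
STOPWORDS = {
    "english": {"the", "and", "of", "is"},
    "spanish": {"el", "la", "de", "es"},
    "german": {"der", "und", "das", "ist"},
}

def guess_language(text):
    """Pick the language whose stopwords overlap the text's words most."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def tally_web_languages(pages):
    """Count total words per detected language across crawled pages."""
    counts = Counter()
    for text in pages:
        counts[guess_language(text)] += len(text.split())
    return counts

# Placeholder "crawled pages"; a real study would fetch these by domain.
pages = [
    "the cat is on the mat and the dog is here",
    "el gato es de la casa",
    "der hund und das haus ist hier",
]
print(tally_web_languages(pages))
```

A real crawl would replace the stopword heuristic with a trained language identifier and aggregate counts over millions of pages, but the per-language word tally is the core measurement the study describes.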

Melbourne scientist wins gold in medical Olympics

A Melbourne scientist won a gold medal at the First International Meeting of Medical Olympics Contest in Thessaloniki for research her laboratory has done on the use of natural compounds, such as olive oil and cinnamon, in treating diabetes and cancer.

Katherine Ververis was chosen to represent her laboratory at the conference, where she presented five papers on research conducted at the Epigenomic Medicine Laboratory at the Baker IDI Heart and Diabetes Institute. The laboratory won a gold medal and three silvers for its work.

The aim of the conference, held in Thessaloniki from 23 to 25 September, was to present original papers of social interest in the medical field.

“I was really excited and happy for the lab and wanted to share the news with them,” the 23-year-old told Neos Kosmos of how she felt after learning her laboratory had won the medals. “Everyone worked really hard on these projects so it’s nice when you’re rewarded for your work.”

Ververis, who is currently completing her PhD, assisted in the research, which has found that cinnamon has anti-inflammatory properties and can help “our cells divide and make new cells”.

Dungeons & Dragons and a Demon Hunter Save the Legends of Tomorrow

When Mallus was revealed to be dormant inside Sara, we knew an episode like this was coming. Though maybe not as scary as I’d have liked, Legends of Tomorrow put together a fun hour of superhero horror here. The dark corridors of the malfunctioning Waverider suit it well. As for the monster, you couldn’t ask for a more formidable threat than Sara Lance.

The Legends have the Death Totem, and they know they need to keep it locked up. How’s that working out for them? Not well, it looks like. More than the box it’s kept in ominously shaking at the end of last week’s episode, Sara’s having some bad dreams. She walks through a version of the Waverider where the lights are out and someone left a fog machine on. She’s greeted by a creepy little girl who tells her that someone forgot about her.

The bad dream does lead to a very sweet scene between Sara and Ava. I like that the show is giving them a normal, loving relationship. Ava even gets a little jealous when she hears about Constantine and Sara’s ’60s fling a few episodes back. But Sara gets out of it by reminding her it was before she had a girlfriend. They’re calling themselves girlfriends now. That’s a big step for Sara. Unfortunately, the moment is interrupted by a severe time crack. The anachronisms have gotten worse. Much worse. The team splits up to take care of a few at once, and each of their individual missions sounds like it would make a pretty great episode. I want to see the Mona Lisa on Antiques Roadshow.

Nick Zano as Nate Heywood/Steel (Photo: Dean Buscher/The CW)

But this episode isn’t about silly time travel goofs. I guess the show figured it needed a break from those for a while. Instead, Sara is still haunted by the voice in her head from the bad dream. It encourages her to open the Death Totem box and put it on. It convinces her that she’s the totem’s bearer. Yeah, we all know something is lying to her.

Amaya and Zari return from their adventure to find Ray injured in his cold fusion experiment. Gideon is strangely offline, and Sara is now in full-face Halloween makeup. She tries to kill the team, but Wally runs them out of the room just in time. At least the show is actually using him now.

It turns out Mallus tricked Sara into putting on the Death Totem. Since she had the demon locked inside her, the totem allowed it to take over her body. On her own, Mallus-possessed Sara isn’t all that scary. It’s hard to be when she looks like the Goodwill version of the Grim Reaper from Bill and Ted’s Bogus Journey. The horror of this episode is a direct result of this season’s great character work. It’s nerve-wracking to watch her torment our favorite superheroes the way she does. She takes the form of Jesse Quick and taunts Wally over their failed relationship. And his mother’s drug addiction. That’s real low. Though Wally puts on a brave face, that momentary distraction allows Mallus to get the jump on the speedster.

Caity Lotz as Death Witch (Photo: The CW)

We knew the Legends were going to have to fight a possessed Sara at some point. And yeah, the situation is about as hopeless as you’d imagine. They’re going to need some help. Fortunately, Ava Sharpe is on the case, and she knows just who to call. It’s John Constantine! God, Matt Ryan’s version of the character deserved so much more than a single season on NBC, the network that couldn’t include the character’s bisexuality or his chain-smoking. At least his upcoming animated spin-off looks good. Unfortunately, his attempt to rescue Sara doesn’t go so well. Mallus is too powerful for his candle ritual, causing an explosion that knocks everyone back.

Fortunately, Sara may be able to fight back herself. After Mallus taunts Zari with visions of her brother, Sara is able to fight the demon for a second. In the end, the demon takes over again, but that moment might have saved Zari’s life.

That’s when the episode’s geekiest sensibilities shine through. I recognize this next bit is pandering, but it’s very effective pandering. Ava’s assistant draws a parallel between the situation the Legends are currently in and his Dungeons & Dragons group. He even has perfect character descriptions for each member of the Legends. His group’s campaign was conveniently similar to the situation the Legends find themselves in now, which gives him an idea for a solution. First, the Legends can’t split up. They immediately split up. Second, Constantine needs to locate the Death Totem so he and the Time Bureau can find the Waverider. That has a much higher chance of happening.

Maisie Richardson-Sellers as Amaya Jiwe/Vixen and Dominic Purcell as Mick Rory/Heat Wave (Photo: Dean Buscher/The CW)

They show up just as Sara is beating the crap out of Nate while disguised as his disappointed grandfather. She’s just playing all the traumatic hits this episode, huh? But try as he might, even with a clever trick up his sleeve, Constantine can’t cast Mallus out. That takes Ava. Admittedly, with an assist from Mick Rory, who turns out to be the Fire Totem bearer. Is it predictable? Yes. Does it look cool? Oh hell yes. In the end though, as Sara is offered a final choice between power under Mallus’ control or a free life full of pain and regret, it’s Ava who pulls her through. The scene is really well done. Emotional and genuine without coming across as cheesy.

That’s why it really sucks that the episode ends with her breaking up with Ava. I know we have to have drama and emotional turmoil, but come on. You can’t have Ava literally call Sara back from a dark abyss only to break them up at the end. Sara and Ava were happy together. It was the one bit of unqualified cuteness on board this ship. And now it’s gone because Sara is afraid of the darkness inside her. This journey has made her realize that it’s always been there, and she thinks Ava deserves better. It’s a mistake. We can only hope that by the end of the season, Sara realizes that.

Matt Ryan as Constantine, Adam Tsekhman as Agent Gary Green and Jes Macallan as Ava Sharpe (Photo: Dean Buscher/The CW)

Though it had some nice dramatic moments, it wasn’t quite as scary as the Legends’ last outing with Constantine and Mallus. It was definitely going for horror, but not all of it landed. Even so, it was still a fine episode. Heroes fighting each other is and always will be extremely good comic book stuff. Plus, it gave Sara an excuse to show off her martial arts skills. That always makes for fun TV. Even better, it looks like Constantine is sticking around for a little while. How much time he’ll spend with the Legends and how much he’ll spend with the Time Bureau’s D&D group remains to be seen. But one thing is for certain: we should all aspire to have a d20 roll as dramatic as John Constantine’s.

Stephen King’s The Stand on its way to CBS All Access

first_img Sprint Related stories Tags King expressed excitement about the new series. “The people involved are men and women who know exactly what they’re doing; the scripts are dynamite,” he said in a statement. “The result bids to be something memorable and thrilling. I believe it will take viewers away to a world they hope will never happen.”Writers Josh Boone and Ben Cavell have been working on the series for years, and Boone will also direct.No premiere date was given for the series. The most terrifying Stephen King creations of all time $210 at Best Buy $261 at Daily Steals via Google Express Tidal 3-month family subscription: $5.99 (save $54) Post a comment CNET may get a commission from retail offers. Share your voice See It $999 Read the Rylo camera preview A new Stephen King series is coming to CBS All Access.  Lou Rocco/ABC via Getty Images Stephen King’s gigantic 1978 post-apocalyptic horror novel The Stand has been dramatized before, but a new 10-episode series based on the best-seller is coming to streaming service CBS All Access. (Disclosure: CBS is CNET’s parent company.) In the book, almost all of the world’s population dies from a weaponized influenza known as Captain Trips, and society breaks down, with supernatural evil being Randall Flagg egging on the violence and destruction, and 108-year-old Mother Abagail leading the good guys. JBL Soundgear wearable speaker: $90 (save $160) Boost Mobile Turo: Save $30 on any car rental Spotify and most other streaming services rely on compressed audio, which robs the listener of full fidelity. Enter Tidal, the only “major” service that delivers lossless audio — meaning at least on par with CD quality, if not better. Want to see (er, hear) the difference for yourself? Grab this excellent extended trial while you can. It’s just $6 for three months, and it’s good for up to six listeners. 
$60 at Best Buy Recently updated to include digital-photo-frame capabilities, the Lenovo Smart Clock brings Google Assistant goodness to your nightstand. It’s a little smaller than the Amazon Echo Show 5, but also a full $30 less (and tied with Prime Day pricing) during this Best Buy Labor Day sale. I thought this might be a mistake, but, no, the weirdly named HP Laptop 15t Value is indeed quite the value at this price. Specs include an Intel Core i7 processor, 12GB of RAM, a 256GB solid-state drive and a 15.6-inch display. However, I strongly recommend paying an extra $50 to upgrade that display to FHD (1,920×1,080), because you’re not likely to be happy with the native 1,366×768 resolution. $999 Though not technically a Labor Day sale, it’s happening during Labor Day sale season — and it’s too good not to share. Nationwide Distributors, via Google Express, has just about the best AirPods deal we’ve seen (when you apply promo code ZBEDWZ at checkout). This is for the second-gen AirPods with the wireless charging case. Can’t imagine these will last long at this price, so if you’re interested, act fast. Free Echo Dot with an Insignia or Toshiba TV (save $50) Chris Monroe/CNET Other Labor Day sales you should check out Best Buy: In addition to some pretty solid MacBook deals that have been running for about a week already, Best Buy is offering up to 40% off major appliances like washers, dryers and stoves. There are also gift cards available with the purchase of select appliances. See it at Best BuyDell: Through Aug. 28, Dell is offering an extra 12% off various laptops, desktops and electronics. And check back starting Aug. 29 for a big batch of Labor Day doorbusters. See it at DellGlassesUSA: Aug. 29 – Sept. 3 only, you can save 65% on all frames with promo code labor65. 
See it at GlassesUSA

Lenovo: The tech company is offering a large assortment of deals and doorbusters through Labor Day, with the promise of up to 56% off certain items — including, at this writing, the IdeaPad 730S laptop for $700 (save $300). See it at Lenovo

Lensabl: Want to keep the frames you already love and paid for? Lensabl lets you mail them in for new lenses, based on your prescription. From now through Sept. 2 only, you can save 20% on the blue light-blocking lens option with promo code BLOCKBLUE. See it at Lensabl

Sears: Between now and Sept. 7, you can save up to 40% on appliances (plus an additional 10% if you shop online), up to 60% on mattresses, up to 50% on Craftsman products and more. The store is also offering some fairly hefty cashback bonuses. See it at Sears

Note: This post was published previously and is continuously updated with new information.

CNET's Cheapskate scours the web for great deals on tech products and much more. For the latest deals and updates, follow the Cheapskate on Facebook and Twitter. Questions about the Cheapskate blog? Find the answers on our FAQ page, and find more great buys on the CNET Deals page.

HP Laptop 15t Value: $520 (save $780)

Lenovo Smart Clock: $59.99 (save $20)

An Echo Dot makes a fine match for any Fire edition TV, because you can use the latter to say things like, "Alexa, turn on the TV." Right now, the 24-inch Insignia Fire TV Edition starts at just $100, while the 32-inch Toshiba Fire TV Edition is on sale for $130.
Just add any Fire TV Edition to your cart, then add a third-gen Echo Dot, and presto: The latter is free.

Rylo 5.8K 360 Video Camera: $250 (save $250)

What's cooler: A snapshot of a firework exploding in front of you, or full 360-degree video of all the fireworks and all the reactions to seeing them? Oooh, ahhh, indeed. At $250, the compact Rylo dual-lens camera is selling for its lowest price yet. And for an extra $50, you can get the bundle that includes the waterproof housing. This deal runs through Sept. 3; it usually costs $500.

Use promo code 19LABOR10 to get an unusually good deal on JBL's interesting hybrid product — not quite headphones, and not quite a traditional speaker, but something you wear like neckphones to listen to music on the go.

Lenovo 130-15AST 15.6-inch laptop: $210 (save $90)

Turo is kind of like Uber meets Airbnb: You borrow someone's car, but you do all the driving. I've used it many times and found it a great alternative to traditional car-rental services — in part because you get to choose exactly the vehicle you want (not just, say, "midsize") and in part because you can often do pickup and dropoff right outside baggage claim. Between now and Sept. 1, the first 300 people to check out can get $30 off any Turo rental with promo code LDW30.

Google Nest Hub: $59 (save $70)

The problem with most entry-level laptops: They come with mechanical hard drives. That makes for a mighty slow Windows experience.
This Lenovo model features a 128GB solid-state drive, so it should be pretty quick to boot and load software, even with its basic processor. Plus, it has a DVD-burner! That's not something you see in many modern laptops, especially at this price.

Formerly known as the Google Home Hub, Google's Nest Hub packs a wealth of Google Assistant goodness into a 7-inch screen. At $59, this is within a buck of the best price we've seen. It lists for $129 and sells elsewhere in the $89-to-$99 range. This is one item of many available as part of eBay's Labor Day Sale (which, at this writing, doesn't specifically mention Labor Day, but that's how it was pitched to us).

DJI's answer to GoPro's action cameras is a rugged little model that's shockproof, dustproof and waterproof down to 11 meters. It normally runs $350, but this deal drops it to $261 when you apply promo code 19LABOR10 at checkout.

Apple AirPods with Wireless Charging Case: $155 (save $45)

DJI Osmo Action camera: $261 (save $89)

I'm shocked — shocked! — to learn that stores are turning Labor Day into an excuse to sell stuff. Wait — no, I'm not. As much as I respect the original intent of the holiday (which became official back in 1894), to most of us, it's just a bonus day off — one that's blissfully tacked onto a weekend. So, yeah, stores; go ahead, run your sales. I'm listening. Perhaps unsurprisingly, Labor Day doesn't bring out bargains to compete with the likes of Black Friday (which will be here before you know it), but there are definitely some sales worth your time. For example:

We've rounded up the best Labor Day mattress deals.
We've also gathered the best Labor Day laptop deals at Best Buy.
The 2019 Vizio P Series Quantum is back under $999.
Be sure to check out Amazon's roughly three dozen Labor Day deals on TVs and audio.
Google Express is having a big sale as well, one that includes deals on game consoles, AirPods, iPhones, laptops and more. Below I've rounded up a handful of individual items I consider to be the cream of the crop, followed by a handy reference guide to other Labor Day sales. Keep in mind, of course, that products may sell out at any time, even if the sale itself is still running. Note that CNET may get a share of revenue from the sale of the products featured on this page.

AP govt to host summit to attract investments on Aug 9

Amaravati: In an effort to attract investments after coming into power, the YSRCP government is hosting an investment summit in Vijayawada on August 9. The event is being conducted in coordination with the Ministry of External Affairs and will be attended by ambassadors, diplomats, consuls-general and other representatives. The Ministry has already sent invitations to 30 to 40 countries across the world; representatives from 26 countries will participate in the conference. Recently, CM YS Jagan Mohan Reddy's government passed a bill to provide 75% of employment opportunities to locals in industries and companies in the state.

Attorney General Jeff Sessions Vows To Prosecute All Illegal Border Crossers And

Border Patrol agents in New Mexico on April 9, 2018. (Julián Aguilar/The Texas Tribune)

U.S. Attorney General Jeff Sessions announced Monday that the Justice Department will begin prosecuting every person who illegally crosses into the United States along the Southwest border, a hard-line policy shift expected to focus in particular on migrants traveling with children.

In a speech to law enforcement officials in Scottsdale, Arizona, Sessions said the Department of Homeland Security will begin referring such cases to the Justice Department for prosecution and that federal prosecutors will "take on as many of those cases as humanly possible until we get to 100 percent."

"If you cross the Southwest border unlawfully, then we will prosecute you," Sessions said, according to a transcript of his remarks. "If you smuggle illegal aliens across our border, then we will prosecute you. If you are smuggling a child, then we will prosecute you and that child will be separated from you as required by law. If you don't like that, then don't smuggle children over our border."

DHS officials say they have seen a significant increase in illegal border crossings over the past year, including a rise in the number of families and unaccompanied children. In the past month, Border Patrol officers say they have encountered more than 50,000 immigrants trying to enter the United States. From April 2017 to April 2018, the number of apprehensions and "inadmissible" border crossings tripled, according to DHS.

Advocates for migrants have said that most are fleeing violence in Central America and should be treated as asylum seekers, not criminals.

Senior immigration and border officials called for the increased prosecutions last month in a confidential memo to Homeland Security Secretary Kirstjen Nielsen.
They said filing criminal charges against migrants, including parents traveling with children, would be the "most effective" way to tamp down on illegal border crossings.

The so-called "zero-tolerance" measure announced Monday could split up thousands of families because children are not allowed in criminal jails. Until now, most families apprehended crossing the border illegally have been released to await civil deportation hearings.

The Trump administration piloted this approach in the Border Patrol's El Paso sector, which includes New Mexico, between July and November 2017, and said the number of families attempting to cross illegally plunged by 64 percent.

The New York Times reported last month that hundreds of children have been taken from their parents at the border since October.

Nielsen told lawmakers in April that DHS aims to keep families together "as long as operationally possible." She said families are separated to "protect the children" in case the adults traveling with them are not really their parents.

Sessions, who as attorney general has been especially aggressive on immigration, said that to carry out the new enforcement policies, he was sending 35 prosecutors to the Southwest and 18 immigration judges to the border to handle asylum claims. Those moves were first announced last week.

Criminal prosecutions at the border have soared over the past two decades, from fewer than 10,000 cases in 1996 to more than 90,000 at their peak in 2013 under former President Barack Obama, according to TRAC, a Syracuse University organization that tracks criminal immigration prosecutions. Last fiscal year, the number of immigration prosecutions declined 14 percent, to nearly 60,000.

The most common criminal charge is "improper entry by alien" — or illegal entry. First-time offenders usually face a federal misdemeanor punishable by up to six months in prison or fines.
Repeat offenders can be imprisoned for up to two years and fined, or charged with the more serious offense of "illegal reentry."

After President Trump called for renewed efforts to tamp down on illegal border crossings last month, Sessions ordered U.S. attorneys on the border to prosecute migrants "to the extent practicable." His remarks Monday appeared to signal that federal prosecutors will make this a higher priority.

"Eleven million people are already here illegally," Sessions said in his speech. "That's more than the population of the state of Georgia. … We're not going to stand for this. We are not going to let this country be invaded. We will not be stampeded. We will not capitulate to lawlessness."

Regular Show #40 ends on a high note

Licensed property comic books are interesting. Many feel as if they exist solely as another means of promoting the source material, but BOOM! has been on a roll with them. Adventure Time, Power Rangers, and Steven Universe are just a few of the properties they're publishing, and every series has been wonderful. Each book has managed to really capture the heart and spirit of its source, while also bringing something different and fresh to it. Regular Show is no different. And for the past 40 issues (over three years), it has been a great comic for both fans of the Cartoon Network show and anyone who loves light-hearted, outrageous fiction.

The 40th and final issue, written by Mad Rupert and drawn by Laura Powell, wraps up the "Apocalypse Benson" arc that began back in issue #37. Having the final arc of the comic series focus on Benson seemed like a weird choice to me, but then again there is only so much that the comic can do with Mordecai and Rigby. While this arc has been really enjoyable up until this point, the finale serves as a really great character piece for Benson, a character they clearly have a lot of affection for. It was surprising to see so much done with him, as he rarely gets time in the spotlight.

Laura Powell's art, and Lisa Moore's colors, really make this book feel like the show. The characters all look spot on, and it has that perfect amount of animated energy. The script reads just like the characters' voices, playing out in my head as if I was watching the cartoon. This isn't easy to do, but this comic has a great understanding of the characters and their perspectives. Regular Show is a very formulaic series, so it is important to get the characters right.

The back-up feature, by Christine Larsen, also focuses on Benson.
On vacation and fishing by himself, he winds up way over his head as he finds himself on an adventure with Norse gods. It is the perfect breather story after the final arc stretched across several issues. Cute, simple and exciting, it complements the Benson-centric main story. The gumball machine man is a curmudgeon with a heart of gold, who just wants to live a quiet life but constantly gets wrapped up in other people's messes. Again, this story arc, and especially this issue, has such an affinity for him.

With the Regular Show cartoon recently announcing it will end after its eighth season (which began airing this past month), it is no wonder that the comic had to come to an end as well. It's going out on a weird and unexpectedly high note, focusing on a forgotten and underutilized character. But in the end, it is more about Benson's relationship with the park, and how the park brings everyone together. This frames it all in a new way, proving Benson was in fact the heart of the book. It's an intriguing way to look at the character and a really strong way to finish the series. The final issue, Regular Show #40, is available to download at Comixology and in your local comic shop today.

Oldest Caspian Horse remains discovered in Iran

(PhysOrg.com) — The Caspian Horse, also known as the "King's Horse" or the Mazandaran horse, is the oldest breed of horse still in existence. The newest discovery of remains makes it even older than originally believed. Caspian horses are much smaller than the typical race horse, averaging 11 hands. They had been believed to be extinct until 1965, when Louise Firouz, the American wife of an Iranian aristocrat, found a wild herd in the Iranian mountains just south of the Caspian Sea.

The Caspian Horse. Image: Kerri-Jo Stewart from Vancouver, Canada / via Wikipedia

In a report from the Circle of Ancient Iranian Studies, archaeologist Ali Mahforuzi discusses the findings at the Gohar Tappeh site, located in the Mazandaran province of Iran between the cities of Neka and Behshahr. In their eighth year at the site, the team discovered the remains of the Caspian in a cemetery which dates back to the late Bronze to early Iron Age, around 3400 BCE. The Caspian Horse was a status symbol in ancient Iran and was routinely presented to kings and queens. The horses were used for chariot racing as well as in battle. It does not come as a surprise to the archaeologists that the Caspian was found within a cemetery.

Citation: Oldest Caspian Horse remains discovered in Iran (2011, May 4) retrieved 18 August 2019 from https://phys.org/news/2011-05-oldest-caspian-horse-iran.html

More information: en.wikipedia.org/wiki/Caspian_horse
This was common in ancient burials and shows the importance and value that was placed on these horses.

Mahforuzi and his team have also discovered numerous architectural structures, as well as graves with various burial methods, suggesting continuous habitation of this region for many generations. The oldest finding at the site currently dates back to the Neolithic age, some 14,000 years ago.

Hill Anganwadi workers seek hike in pay retirement benefits

Darjeeling: The Anganwadi workers and helpers of the Darjeeling Hills have demanded an increment in their monthly remuneration. Along with this, they have demanded retirement benefits as well. The Shishu Bikash Anganwadi Karmi Sahakarmi Sangathan, an association of Anganwadi workers and helpers, recently sat in a meeting with the secretary and officials of the Social Welfare department, government of West Bengal.

"Increment of our remuneration has been our long-standing demand, especially in the Hills where the terrain is tough. The meeting held in Kolkata ended on a positive note, with the department agreeing to many of our demands, including increment and retirement benefits," stated Mani Kumar Rai, president of the association.

At present, Anganwadi workers are paid a Rs 4,800 monthly stipend, while helpers receive Rs 3,300 in West Bengal. "In many states, workers receive Rs 10,000 and helpers Rs 5,000. They also receive Rs 1 lakh on retirement, and in case of death during the service period, the next of kin receive Rs 2 lakh. All these benefits are not given to us in West Bengal," added Rai.

However, the association is banking its hopes on the assurances given by the officials at the Kolkata meeting. "Along with the increment, they have assured us that on retirement, an amount equivalent to 2 years of salary will be given. Those who have served as workers for more than 10 years will be upgraded to supervisors. Question papers in Nepali will also be made available for departmental exams," stated Asha Hingmang, treasurer of the association.

Create an RNN-based Python machine translation system [Tutorial]

Machine translation is the process of automatically translating text from one language to another using neural network techniques, with no human intervention required. In today's machine learning tutorial, we will understand the architecture and learn how to train and build your own machine translation system. This project will help us automatically translate German sentences into English. This article is an excerpt from a book written by Luca Massaron, Alberto Boschetti, Alexey Grigorev, Abhishek Thakur, and Rajalingappaa Shanmugamani titled TensorFlow Deep Learning Projects.

Walkthrough of the architecture

A machine translation system receives as input an arbitrary string in one language and produces, as output, a string with the same meaning but in another language. Google Translate is one example (many other major IT companies have their own). There, users are able to translate to and from more than 100 languages. Using the webpage is easy: on the left, just put the sentence you want to translate (for example, Hello World), select its language (in the example, English), and select the language you want it translated to. Here's an example where we translate the sentence Hello World to French:

Is it easy? At a glance, we may think it's a simple dictionary substitution. Words are chunked, the translation is looked up in the specific English-to-French dictionary, and each word is substituted with its translation. Unfortunately, that's not the case. In the example, the English sentence has two words, while the French one has three. More generally, think about phrasal verbs (turn up, turn off, turn on, turn down), the Saxon genitive, grammatical gender, tenses, conditional sentences... they don't always have a direct translation, and the correct one should follow the context of the sentence. That's why, for doing machine translation, we need some artificial intelligence tools.
Specifically, as for many other natural language processing (NLP) tasks, we'll be using recurrent neural networks (RNNs). Their main feature is that they work on sequences: given an input sequence, they produce an output sequence. The objective of this article is to create the correct training pipeline for having a sentence as the input sequence, and its translation as the output one. Remember also the no free lunch theorem: this process isn't easy, and more solutions can be created with the same result. Here, in this article, we will propose a simple but powerful one.

First of all, we start with the corpora: it's maybe the hardest thing to find, since it should contain high-fidelity translations of many sentences from one language to another. Fortunately, NLTK, a well-known Python package for NLP, contains the corpora Comtrans. Comtrans is the acronym of combination approach to machine translation, and contains an aligned corpus for three languages: German, French, and English.

In this project, we will use these corpora for a few reasons, as follows:

- It's easy to download and import in Python.
- No preprocessing is needed to read it from disk / from the internet. NLTK already handles that part.
- It's small enough to be used on many laptops (a few tens of thousands of sentences).
- It's freely available on the internet.

For more information about the Comtrans project, go to http://www.fask.uni-mainz.de/user/rapp/comtrans/.

More specifically, we will try to create a machine translation system to translate German to English. We picked these two languages at random among the ones available in the Comtrans corpora: feel free to flip them, or to use the French corpus instead. The pipeline of our project is generic enough to handle any combination.
Let's now investigate how the corpora are organized by typing some commands:

from nltk.corpus import comtrans
print(comtrans.aligned_sents('alignment-de-en.txt')[0])

The output is as follows:

'Resumption of the se...'>

The pairs of sentences are available using the function aligned_sents. The filename contains the source and destination languages. In this case, as for the following part of the project, we will translate German (de) to English (en). The returned object is an instance of the class nltk.translate.api.AlignedSent. Looking at the documentation, the first language is accessible with the attribute words, while the second language is accessible with the attribute mots. So, to extract the German sentence and its English translation separately, we should run:

print(comtrans.aligned_sents()[0].words)
print(comtrans.aligned_sents()[0].mots)

The preceding code outputs:

['Wiederaufnahme', 'der', 'Sitzungsperiode']
['Resumption', 'of', 'the', 'session']

How nice! The sentences are already tokenized, and they look like sequences. In fact, they will be the input and (hopefully) the output of the RNN which will provide the German-to-English machine translation service for our project.

Furthermore, if you want to understand the dynamics of the language, Comtrans makes the alignment of the words in the translation available:

print(comtrans.aligned_sents()[0].alignment)

The preceding code outputs:

0-0 1-1 1-2 2-3

The first word in German is translated to the first word in English (Wiederaufnahme to Resumption), the second to both the second and the third (der to of and the), and the third (at index 2) is translated to the fourth (Sitzungsperiode to session).

Pre-processing of the corpora

The first step is to retrieve the corpora. We've already seen how to do this, but let's now formalize it in a function. To make it generic enough, let's enclose these functions in a file named corpora_tools.py.
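To internalize the alignment notation above, it can also be parsed by hand in a few lines (aligned_pairs is an illustrative helper written for this article, not part of the book's corpora_tools.py):

```python
def aligned_pairs(alignment, src_tokens, dst_tokens):
    """Turn an alignment string like '0-0 1-1' into (source, target) word pairs."""
    pairs = []
    for link in alignment.split():
        # Each link 'i-j' pairs source token i with destination token j.
        i, j = (int(n) for n in link.split('-'))
        pairs.append((src_tokens[i], dst_tokens[j]))
    return pairs

de = ['Wiederaufnahme', 'der', 'Sitzungsperiode']
en = ['Resumption', 'of', 'the', 'session']
print(aligned_pairs('0-0 1-1 1-2 2-3', de, en))
# [('Wiederaufnahme', 'Resumption'), ('der', 'of'), ('der', 'the'), ('Sitzungsperiode', 'session')]
```

Note how der maps to two English tokens, which is exactly why word-by-word dictionary lookup cannot work.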
Let's do some imports that we will use later on:

import pickle
import re
from collections import Counter
from nltk.corpus import comtrans

Now, let's create the function to retrieve the corpora:

def retrieve_corpora(translated_sentences_l1_l2='alignment-de-en.txt'):
    print("Retrieving corpora: {}".format(translated_sentences_l1_l2))
    als = comtrans.aligned_sents(translated_sentences_l1_l2)
    sentences_l1 = [sent.words for sent in als]
    sentences_l2 = [sent.mots for sent in als]
    return sentences_l1, sentences_l2

This function has one argument: the file containing the aligned sentences from the NLTK Comtrans corpora. It returns two lists of sentences (actually, lists of tokens), one for the source language (in our case, German), the other for the destination language (in our case, English). In a separate Python REPL, we can test this function:

sen_l1, sen_l2 = retrieve_corpora()
print("# A sentence in the two languages DE & EN")
print("DE:", sen_l1[0])
print("EN:", sen_l2[0])
print("# Corpora length (i.e. number of sentences)")
print(len(sen_l1))
assert len(sen_l1) == len(sen_l2)

The preceding code creates the following output:

Retrieving corpora: alignment-de-en.txt
# A sentence in the two languages DE & EN
DE: ['Wiederaufnahme', 'der', 'Sitzungsperiode']
EN: ['Resumption', 'of', 'the', 'session']
# Corpora length (i.e. number of sentences)
33334

We also printed the number of sentences in each corpus (33,334) and asserted that the number of sentences in the source and destination languages is the same.

In the following step, we want to clean up the tokens. Specifically, we want to tokenize punctuation and lowercase the tokens. To do so, we can create a new function in corpora_tools.py.
We will use the regex module to perform the further splitting tokenization:

def clean_sentence(sentence):
    regex_splitter = re.compile("([!?.,:;$\"')( ])")
    clean_words = [re.split(regex_splitter, word.lower()) for word in sentence]
    return [w for words in clean_words for w in words if words if w]

Again, in the REPL, let's test the function:

clean_sen_l1 = [clean_sentence(s) for s in sen_l1]
clean_sen_l2 = [clean_sentence(s) for s in sen_l2]
print("# Same sentence as before, but chunked and cleaned")
print("DE:", clean_sen_l1[0])
print("EN:", clean_sen_l2[0])

The preceding code outputs the same sentence as before, but chunked and cleaned:

DE: ['wiederaufnahme', 'der', 'sitzungsperiode']
EN: ['resumption', 'of', 'the', 'session']

Nice! The next step for this project is filtering out the sentences that are too long to be processed. Since our goal is to perform the processing on a local machine, we should limit ourselves to sentences of up to N tokens. In this case, we set N=20, in order to be able to train the learner within 24 hours. If you have a powerful machine, feel free to increase that limit. To make the function generic enough, there's also a lower bound, with a default value set to 0, that is, an empty token set. The logic of the function is very easy: if the number of tokens for a sentence or its translation is greater than N, then the sentence (in both languages) is removed:

def filter_sentence_length(sentences_l1, sentences_l2, min_len=0, max_len=20):
    filtered_sentences_l1 = []
    filtered_sentences_l2 = []
    for i in range(len(sentences_l1)):
        if min_len <= len(sentences_l1[i]) <= max_len and \
           min_len <= len(sentences_l2[i]) <= max_len:
            filtered_sentences_l1.append(sentences_l1[i])
            filtered_sentences_l2.append(sentences_l2[i])
    return filtered_sentences_l1, filtered_sentences_l2

Again, let's see in the REPL how many sentences survived this filter. Remember, we started with more than 33,000:

filt_clean_sen_l1, filt_clean_sen_l2 = filter_sentence_length(clean_sen_l1, clean_sen_l2)
print("# Filtered Corpora length (i.e. number of sentences)")
print(len(filt_clean_sen_l1))
assert len(filt_clean_sen_l1) == len(filt_clean_sen_l2)

The preceding code prints the following output:

# Filtered Corpora length (i.e. number of sentences)
14788

Almost 15,000 sentences survived, that is, half of the corpora.

Now, we finally move from text to numbers (which AI mainly uses). To do so, we shall create a dictionary of the words for each language. The dictionary should be big enough to contain most of the words, though we can discard some if the language has words with a low occurrence. This is a common practice even in tf-idf (term frequency within a document, multiplied by the inverse of the document frequency, that is, in how many documents that token appears), where very rare words are discarded to speed up the computation and make the solution more scalable and generic. We need four special symbols in both dictionaries:

- One symbol for padding (we'll see later why we need it)
- One symbol for dividing the two sentences
- One symbol to indicate where the sentence stops
- One symbol to indicate unknown words (like the very rare ones)

For doing so, let's create a new file named data_utils.py containing the following lines of code:

_PAD = "_PAD"
_GO = "_GO"
_EOS = "_EOS"
_UNK = "_UNK"
_START_VOCAB = [_PAD, _GO, _EOS, _UNK]
PAD_ID = 0
GO_ID = 1
EOS_ID = 2
UNK_ID = 3
OP_DICT_IDS = [PAD_ID, GO_ID, EOS_ID, UNK_ID]

Then, back in the corpora_tools.py file, let's add the following function:

import data_utils

def create_indexed_dictionary(sentences, dict_size=10000, storage_path=None):
    count_words = Counter()
    dict_words = {}
    opt_dict_size = len(data_utils.OP_DICT_IDS)
    for sen in sentences:
        for word in sen:
            count_words[word] += 1
    dict_words[data_utils._PAD] = data_utils.PAD_ID
    dict_words[data_utils._GO] = data_utils.GO_ID
    dict_words[data_utils._EOS] = data_utils.EOS_ID
    dict_words[data_utils._UNK] = data_utils.UNK_ID
    for idx, item in enumerate(count_words.most_common(dict_size)):
        dict_words[item[0]] = idx + opt_dict_size
    if storage_path:
        pickle.dump(dict_words, open(storage_path, "wb"))
    return dict_words

This function takes as arguments the number of entries in the dictionary and the path where to store the dictionary. Remember, the dictionary is created while training the algorithms: during the testing phase it's loaded, and the token/symbol association should be the same one used in training. If the number of unique tokens is greater than the value set, only the most popular ones are selected. At the end, the dictionary contains the association between a token and its ID for each language.

After building the dictionary, we should look up the tokens and substitute them with their token ID. For that, we need another function:

def sentences_to_indexes(sentences, indexed_dictionary):
    indexed_sentences = []
    not_found_counter = 0
    for sent in sentences:
        idx_sent = []
        for word in sent:
            try:
                idx_sent.append(indexed_dictionary[word])
            except KeyError:
                idx_sent.append(data_utils.UNK_ID)
                not_found_counter += 1
        indexed_sentences.append(idx_sent)
    print('[sentences_to_indexes] Did not find {} words'.format(not_found_counter))
    return indexed_sentences

This step is very simple; each token is substituted with its ID. If the token is not in the dictionary, the ID of the unknown token is used. Let's see in the REPL how our sentences look after these steps:

dict_l1 = create_indexed_dictionary(filt_clean_sen_l1, dict_size=15000, storage_path="/tmp/l1_dict.p")
dict_l2 = create_indexed_dictionary(filt_clean_sen_l2, dict_size=10000, storage_path="/tmp/l2_dict.p")
idx_sentences_l1 = sentences_to_indexes(filt_clean_sen_l1, dict_l1)
idx_sentences_l2 = sentences_to_indexes(filt_clean_sen_l2, dict_l2)
print("# Same sentences as before, with their dictionary ID")
print("DE:", list(zip(filt_clean_sen_l1[0], idx_sentences_l1[0])))
print("EN:", list(zip(filt_clean_sen_l2[0], idx_sentences_l2[0])))

This code prints the token and its ID for both the sentences.
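As a quick sanity check, the ID-assignment logic of create_indexed_dictionary can be reproduced in miniature, with no NLTK dependency (toy_indexed_dictionary is a simplified stand-in written for this illustration, not the book's function):

```python
from collections import Counter

_START_VOCAB = ['_PAD', '_GO', '_EOS', '_UNK']  # IDs 0-3 are reserved

def toy_indexed_dictionary(sentences, dict_size=10):
    # Count every token, reserve the special IDs, then hand out IDs by popularity.
    counts = Counter(w for sent in sentences for w in sent)
    dict_words = {sym: idx for idx, sym in enumerate(_START_VOCAB)}
    for idx, (word, _) in enumerate(counts.most_common(dict_size)):
        dict_words[word] = idx + len(_START_VOCAB)
    return dict_words

d = toy_indexed_dictionary([['the', 'cat'], ['the', 'dog']])
print(d['_PAD'], d['the'])  # 0 4: 'the' is the most frequent token, so it gets the first free ID
```

Real runs use the full 10,000- and 15,000-entry dictionaries built above, but the bookkeeping is the same.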
What's used in the RNN will be just the second element of each tuple, that is, the integer ID:

# Same sentences as before, with their dictionary ID
DE: [('wiederaufnahme', 1616), ('der', 7), ('sitzungsperiode', 618)]
EN: [('resumption', 1779), ('of', 8), ('the', 5), ('session', 549)]

Please also note how frequent tokens, such as the and of in English, and der in German, have low IDs. That's because the IDs are sorted by popularity (see the body of the function create_indexed_dictionary).

Even though we did the filtering to limit the maximum size of the sentences, we should create a function to extract the maximum size. For the lucky owners of very powerful machines who didn't do any filtering, that's the moment to see how long the longest sentence in the RNN will be. That's simply the function:

def extract_max_length(corpora):
    return max([len(sentence) for sentence in corpora])

Let's apply the following to our sentences:

max_length_l1 = extract_max_length(idx_sentences_l1)
max_length_l2 = extract_max_length(idx_sentences_l2)
print("# Max sentence sizes:")
print("DE:", max_length_l1)
print("EN:", max_length_l2)

As expected, the output is:

# Max sentence sizes:
DE: 20
EN: 20

The final preprocessing step is padding. We need all the sequences to be the same length, therefore we should pad the shorter ones. Also, we need to insert the correct tokens to instruct the RNN where the string begins and ends.
Basically, this step should:

- Pad the input sequences so that they are all 20 symbols long
- Pad the output sequences so that they are all 20 symbols long
- Insert a _GO at the beginning of the output sequence and an _EOS at the end, to mark the start and the end of the translation

This is done by the following function (insert it in corpora_tools.py):

```python
def prepare_sentences(sentences_l1, sentences_l2, len_l1, len_l2):
    assert len(sentences_l1) == len(sentences_l2)
    data_set = []
    for i in range(len(sentences_l1)):
        padding_l1 = len_l1 - len(sentences_l1[i])
        pad_sentence_l1 = ([data_utils.PAD_ID]*padding_l1) + sentences_l1[i]
        padding_l2 = len_l2 - len(sentences_l2[i])
        pad_sentence_l2 = [data_utils.GO_ID] + sentences_l2[i] + [data_utils.EOS_ID] + ([data_utils.PAD_ID] * padding_l2)
        data_set.append([pad_sentence_l1, pad_sentence_l2])
    return data_set
```

To test it, let's prepare the dataset and print the first sentence:

```python
data_set = prepare_sentences(idx_sentences_l1, idx_sentences_l2, max_length_l1, max_length_l2)
print("# Prepared minibatch with paddings and extra stuff")
print("DE:", data_set[0][0])
print("EN:", data_set[0][1])
print("# The sentence pass from X to Y tokens")
print("DE:", len(idx_sentences_l1[0]), "->", len(data_set[0][0]))
print("EN:", len(idx_sentences_l2[0]), "->", len(data_set[0][1]))
```

The preceding code outputs the following:

```
# Prepared minibatch with paddings and extra stuff
DE: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1616, 7, 618]
EN: [1, 1779, 8, 5, 549, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# The sentence pass from X to Y tokens
DE: 3 -> 20
EN: 4 -> 22
```

As you can see, the input and the output are padded with zeros to have a constant length (in the dictionary, zero corresponds to _PAD; see data_utils.py), and the output contains the marker 1 (_GO) just before the start and 2 (_EOS) just after the end of the sentence. As proven effective in the literature, we're going to pad the input sentences at the start and the output sentences at the end.
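The padding scheme can be checked in isolation with toy IDs. This standalone sketch re-implements the same logic as prepare_sentences without the project's imports, hard-coding the reserved IDs PAD=0, GO=1, EOS=2 used in data_utils:

```python
PAD_ID, GO_ID, EOS_ID = 0, 1, 2  # same reserved IDs as data_utils

def pad_pair(sentence_l1, sentence_l2, len_l1, len_l2):
    # Input: left-padded with PAD up to len_l1 symbols
    padded_l1 = [PAD_ID] * (len_l1 - len(sentence_l1)) + sentence_l1
    # Output: GO + tokens + EOS, then right-padded with PAD
    padded_l2 = [GO_ID] + sentence_l2 + [EOS_ID] + [PAD_ID] * (len_l2 - len(sentence_l2))
    return padded_l1, padded_l2

# The first sentence pair from the example above
de, en = pad_pair([1616, 7, 618], [1779, 8, 5, 549], 20, 20)
print(len(de), len(en))  # 20 and 22, matching the printed sizes
```

Note that the output sequence ends up len_l2 + 2 symbols long, because _GO and _EOS are added on top of the padding budget; that is why the English sentence grows to 22 items while the German one stays at 20.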
After this operation, all the input sentences are 20 items long, and the output sentences 22.

Training the machine translator

So far, we've seen the steps to preprocess the corpora, but not the model used. The model is actually already available in the TensorFlow Models repository, freely downloadable from https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/seq2seq_model.py. The piece of code is licensed under Apache 2.0. We really thank the authors for having open sourced such a great model.

Copyright 2015 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

We will see the usage of the model throughout this section. First, let's create a new file named train_translator.py and put in some imports and some constants. We will save the dictionary in the /tmp/ directory, as well as the model and its checkpoints:

```python
import time
import math
import sys
import pickle
import glob
import os
import tensorflow as tf
from seq2seq_model import Seq2SeqModel
from corpora_tools import *

path_l1_dict = "/tmp/l1_dict.p"
path_l2_dict = "/tmp/l2_dict.p"
model_dir = "/tmp/translate"
model_checkpoints = model_dir + "/translate.ckpt"
```

Now, let's use all the tools created in the previous section within a function that, given a Boolean flag, returns the corpora.
More specifically, if the argument is False, it builds the dictionary from scratch (and saves it); otherwise, it uses the dictionary available in the path:

```python
def build_dataset(use_stored_dictionary=False):
    sen_l1, sen_l2 = retrieve_corpora()
    clean_sen_l1 = [clean_sentence(s) for s in sen_l1]
    clean_sen_l2 = [clean_sentence(s) for s in sen_l2]
    filt_clean_sen_l1, filt_clean_sen_l2 = filter_sentence_length(clean_sen_l1, clean_sen_l2)
    if not use_stored_dictionary:
        dict_l1 = create_indexed_dictionary(filt_clean_sen_l1, dict_size=15000, storage_path=path_l1_dict)
        dict_l2 = create_indexed_dictionary(filt_clean_sen_l2, dict_size=10000, storage_path=path_l2_dict)
    else:
        dict_l1 = pickle.load(open(path_l1_dict, "rb"))
        dict_l2 = pickle.load(open(path_l2_dict, "rb"))
    dict_l1_length = len(dict_l1)
    dict_l2_length = len(dict_l2)
    idx_sentences_l1 = sentences_to_indexes(filt_clean_sen_l1, dict_l1)
    idx_sentences_l2 = sentences_to_indexes(filt_clean_sen_l2, dict_l2)
    max_length_l1 = extract_max_length(idx_sentences_l1)
    max_length_l2 = extract_max_length(idx_sentences_l2)
    data_set = prepare_sentences(idx_sentences_l1, idx_sentences_l2, max_length_l1, max_length_l2)
    return (filt_clean_sen_l1, filt_clean_sen_l2), data_set, (max_length_l1, max_length_l2), (dict_l1_length, dict_l2_length)
```

This function returns the cleaned sentences, the dataset, the maximum length of the sentences, and the lengths of the dictionaries. We also need a function to clean up the model directory: every time we run the training routine we should remove leftover checkpoints, so we don't restore the model from stale information.
We can do this with a very simple function:

```python
def cleanup_checkpoints(model_dir, model_checkpoints):
    for f in glob.glob(model_checkpoints + "*"):
        os.remove(f)
    try:
        os.mkdir(model_dir)
    except FileExistsError:
        pass
```

Finally, let's create the model in a reusable fashion:

```python
def get_seq2seq_model(session, forward_only, dict_lengths, max_sentence_lengths, model_dir):
    model = Seq2SeqModel(
        source_vocab_size=dict_lengths[0],
        target_vocab_size=dict_lengths[1],
        buckets=[max_sentence_lengths],
        size=256,
        num_layers=2,
        max_gradient_norm=5.0,
        batch_size=64,
        learning_rate=0.5,
        learning_rate_decay_factor=0.99,
        forward_only=forward_only,
        dtype=tf.float16)
    ckpt = tf.train.get_checkpoint_state(model_dir)
    if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
        print("Reading model parameters from {}".format(ckpt.model_checkpoint_path))
        model.saver.restore(session, ckpt.model_checkpoint_path)
    else:
        print("Created model with fresh parameters.")
        session.run(tf.global_variables_initializer())
    return model
```

This function calls the constructor of the model, passing the following parameters:

- The source vocabulary size (German, in our example)
- The target vocabulary size (English, in our example)
- The buckets (in our example there is just one, since we padded all the sequences to a single size)
- The long short-term memory (LSTM) internal unit size
- The number of stacked LSTM layers
- The maximum norm of the gradient (for gradient clipping)
- The mini-batch size (that is, how many observations per training step)
- The learning rate
- The learning rate decay factor
- The direction of the model (forward only or not)
- The type of data (in our example, we will use float16, that is, a float stored in 2 bytes)

To make the training faster and obtain a model with good performance, we have already set these values in the code; feel free to change them and see how it performs. The final if/else in the function retrieves the model from its checkpoint, if the model already exists.
In fact, this function will be used in the decoder too, to retrieve the model and run it on the test set. Finally, we have reached the function to train the machine translator. Here it is:

```python
def train():
    with tf.Session() as sess:
        model = get_seq2seq_model(sess, False, dict_lengths, max_sentence_lengths, model_dir)
        # This is the training loop.
        step_time, loss = 0.0, 0.0
        current_step = 0
        bucket = 0
        steps_per_checkpoint = 100
        max_steps = 20000
        while current_step < max_steps:
            # ... (loop body elided in the source; as described below, it
            # fetches a minibatch with model.get_batch, runs model.step, and
            # every steps_per_checkpoint steps reports the perplexity, saves
            # a checkpoint, and decays the learning rate)
```

The function starts by creating the model. It also sets some constants for the steps per checkpoint and the maximum number of steps. Specifically, in the code, we will save a model every 100 steps and we will perform no more than 20,000 steps. If it still takes too long, feel free to kill the program: every checkpoint contains a trained model, and the decoder will use the most up-to-date one.

At this point, we enter the while loop. For each step, we ask the model for a minibatch of data (of size 64, as set previously). The method get_batch returns the inputs (that is, the source sequence), the outputs (that is, the destination sequence), and the weights of the model. With the method step, we run one step of the training. One piece of information returned is the loss for the current minibatch of data. That's all the training!

To report the performance and store the model every 100 steps, we print the average perplexity of the model (the lower, the better) over the previous 100 steps, and we save a checkpoint. The perplexity is a metric connected to the uncertainty of the predictions: the more confident we are about the tokens, the lower the perplexity of the output sentence will be. We also reset the counters, extract the same metric from a single minibatch of the test set (in this case, a random minibatch of the dataset), and print its performance too. Then, the training process restarts.

As an improvement, every 100 steps we also reduce the learning rate by a factor.
In this case, we multiply it by 0.99. This helps the convergence and the stability of the training.

We now have to connect all the functions together. In order to create a script that can be called from the command line, but whose functions can also be imported by other scripts, we can create a main, as follows:

```python
if __name__ == "__main__":
    _, data_set, max_sentence_lengths, dict_lengths = build_dataset(False)
    cleanup_checkpoints(model_dir, model_checkpoints)
    train()
```

In the console, you can now train your machine translator system with a very simple command:

```
$> python train_translator.py
```

On an average laptop, without an NVIDIA GPU, it takes more than a day to reach a perplexity below 10 (12+ hours). This is the output:

```
Retrieving corpora: alignment-de-en.txt
[sentences_to_indexes] Did not find 1097 words
[sentences_to_indexes] Did not find 0 words
Created model with fresh parameters.
global step 100 learning rate 0.5 step-time 4.3573073434829713 perplexity 526.6638556683066
eval: perplexity 159.2240770935855
[...]
global step 10500 learning rate 0.180419921875 step-time 4.35106209993362414 perplexity 2.0458043055629487
eval: perplexity 1.8646006006241982
[...]
```

In this article, we've seen how to create a machine translation system based on an RNN. We've seen how to organize the corpus and how to train the model. To learn how to test the model and translate with it, check out the book TensorFlow Deep Learning Projects.
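As a closing sanity check, the figures in the training log above follow from simple arithmetic: the learning rate decays by a factor of 0.99 at every 100-step checkpoint, and perplexity is the exponential of the average cross-entropy loss. The sketch below is illustrative only; the logged rate (0.1804 at step 10500) won't match it exactly, because the model stores the rate in float16 and the exact decay schedule lives inside Seq2SeqModel:

```python
import math

initial_lr, decay = 0.5, 0.99
steps_per_checkpoint = 100

def decayed_lr(step):
    # One multiplicative decay per completed 100-step checkpoint
    return initial_lr * decay ** (step // steps_per_checkpoint)

print(decayed_lr(10500))  # ~0.174, in the ballpark of the logged 0.1804

def perplexity(loss):
    # Perplexity = e^loss; guard against overflow for very large losses
    return math.exp(loss) if loss < 300 else float('inf')

print(perplexity(2.0))  # ~7.39, i.e. roughly the perplexity logged at step 10500
```

Lower perplexity means the model spreads less probability mass over wrong tokens, which is why the eval perplexity falling from about 159 to below 2 over 10,000 steps indicates the translator is actually learning.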