
CDC Adds New Destinations to Highest Travel Warning Level, Including Bermuda and Antigua

Bermuda is among the three new destinations upgraded to Level 4 status.

The U.S. Centers for Disease Control and Prevention has added three new destinations to its highest Level 4 COVID-19 travel warning list, including Bermuda and the Caribbean island of Antigua.

Travel restrictions and guidelines are rapidly changing amid the coronavirus pandemic and the spread of the highly contagious Delta variant; the CDC updates its travel guidance weekly. The latest advisory has upgraded Bermuda, Antigua and Barbuda, and Guyana to the “Very High” Level 4 category.

A destination is considered to have a Level 4 “Very High” level of transmission if 500 or more new cases of COVID-19 are recorded per 100,000 people over a 28-day period. Bermuda, Antigua and Barbuda, and Guyana were all previously assigned Level 3 “High” status, which is given to locales that have recorded between 100 and 500 cases per 100,000 residents within 28 days.

The CDC has raised a number of Caribbean islands to the Level 4 category over the past several weeks, including Grenada, St. Kitts and Nevis, St. Martin, St. Barthelemy and the Bahamas.

The CDC recommends that Americans avoid travel to any destination with Level 4 status, and that anyone who absolutely must visit one of these locales be fully vaccinated beforehand. At the moment, the agency advises unvaccinated Americans to avoid all international travel.

US Will Ease Restrictions for Fully Vaccinated Foreign Travelers in November
The U.S. will lift restrictions on vaccinated foreign travelers starting in November.

The CDC’s latest travel guidance comes after news from the White House that the U.S. will ease travel restrictions on vaccinated foreign visitors starting in November. The new policy will require foreign travelers to provide proof of vaccination as well as a negative COVID-19 test taken within three days of traveling to the United States, while unvaccinated Americans will need to test negative for COVID-19 one day prior to travel and again upon arrival. The CDC is also expected to issue an order requiring airlines to collect travelers’ phone numbers and email addresses for a new contact tracing system.


Biden hosting budget talks in Delaware with Schumer, Manchin


By ALAN FRAM

WASHINGTON (AP) — President Joe Biden was hosting two pivotal senators for meetings in Delaware on Sunday in hopes of resolving lingering disputes over Democrats’ long-stalled effort to craft an expansive social and environment measure.

Senate Majority Leader Chuck Schumer, D-N.Y., and Sen. Joe Manchin, D-W.Va., were scheduled to attend the session, the White House said.

Manchin and Sen. Kyrsten Sinema, D-Ariz., two of their party’s most moderate members, have insisted on reducing the size of the package and have pressed for other changes.

Democrats initially planned that the measure would contain $3.5 trillion worth of spending and tax initiatives over 10 years. But demands by moderates led by Manchin and Sinema to contain costs mean its final price tag could well be less than $2 trillion.

Disputes remain over whether some priorities must be cut or excluded. These include plans to expand Medicare coverage, provide child care assistance and help lower-income college students. Manchin, whose state has a major coal industry, has opposed proposals to penalize utilities that do not switch quickly to clean energy.

The White House and congressional leaders have tried to push monthslong negotiations toward a conclusion by the end of October. Democrats’ aim is to produce an outline by then that would spell out the overall size of the measure and describe policy goals that leaders as well as progressives and moderates would endorse.

The wide-ranging measure carries many of Biden’s top domestic priorities. Party leaders want to end internal battles, avert the risk that the effort could fail and focus voters’ attention on the plan’s popular programs for helping families with child care, health costs and other issues.

Democrats also want Biden to be able to cite accomplishments when he attends a global summit in Scotland on climate change in early November. They also have wanted to make progress that could help Democrat Terry McAuliffe win a neck-and-neck Nov. 2 gubernatorial election in Virginia.

The hope is that an agreement between the party’s two factions would create enough trust to let Democrats finally push through the House a separate $1 trillion package of highway and broadband projects.

That bipartisan measure was approved over the summer by the Senate. But progressives have held it up in the House as leverage to prompt moderates to back the bigger, broader package of health care, education and environment initiatives.


Why workers quit? Blame the stingy boss!


With apologies to country songwriter David Allan Coe, the 2021 job market’s theme song is “Take This Job and Quit It.”

In September, 4.3 million U.S. workers quit their jobs, according to the Bureau of Labor Statistics, the highest number on record and evidence of the public’s broad rethinking of employment and whether it’s a worthwhile endeavor.

Why is the “I’m outta here” movement such a hot workplace trend? The easiest way to get a better raise these days is to switch jobs.

This unfortunate career tactic is bolstered by my trusty spreadsheet’s review of detailed wage stats from the Federal Reserve Bank of Atlanta.

Job switchers — those changing employers or job duties or going to a different occupation or industry — got a median 5.4% annual wage increase during the three months ended in September.

Now, compare that with folks keeping their jobs, who only saw their wages go up 3.5%. Or overall U.S. wage growth at 4.2%.

This is the largest gap between raises for “switchers” and “stayers” in 23 years. Talk about an incentive to quit.

So perhaps bosses should ask themselves if they’re part of the problem.

Hunt for raises

Workplace analysts, policymakers and business leaders have debated the motivations behind all the quitting.

Suggested factors range from fear of catching coronavirus on the job to plenty of openings to choose from and a lack of childcare for younger members of the workforce. The seemingly illogical tactic of bosses paying up for a new person vs. giving existing staff more cash has to be part of the discussion.

Stats show employers became incredibly stingy with salary raises during and after the Great Recession.

Let’s look at an odd workplace stat tracked by the Atlanta Fed: workers who got no raise at all. In the 2010s, wages were stagnant for 15% of the workforce. That was up from the 2000s, when only 12.3% got no salary bump.

Then came the pandemic’s economic volatility, and surprisingly, workers were again valuable: The share of “no raises” fell to 13.4% by August.

We’re witnessing another chapter in the evolving give-and-take between boss and worker.

Before the pandemic, career stability and workplace culture — rather than pay — felt like the most-desired traits. Workers focused on higher pay were often forced to job hunt while bosses got their stable flock ping-pong tables and gourmet coffee machines.

Today, it seems like it’s all about the money. Let’s look at the varying size of the financial carrot offered to those claiming a new job.

From 1998 to 2007, the bubble-fueled boom years, job switchers got 4.9% raises vs. 4.1% for those who didn’t. That’s a 0.8 percentage-point reason to change jobs.

When those good times turned extra sour — the Great Recession era of 2008 to 2012 — the clout of job switchers diminished with 3% raises barely ahead of 2.9% for “stayers.”

Then came the 2013-19 economic rebound and the pay-hike edge returned for switchers: 3.3% raises vs. 2.6% for stayers — a 0.7 point gap.

And these premium raises only grew in the pandemic era: Switchers averaged 4.2% raises since March 2020 vs. 3.2% — a full-point gap.

No uniform pay

So, who’s getting the better raises?

This summer only two job-market slices offered larger raises than job switching, according to my spreadsheet’s analysis of 32 worker characteristics tracked by the Atlanta Fed using 12-month moving averages.

You’d either have to be among the youngest workers — ages 16 to 24, whose typical wages jumped 9.5% in a year — or be among the lowest-paid workers, who got 4.8% raises.

Workers in hard-to-fill, entry-level or poorly-paying positions got nearly the same pay hikes as quitters. Pay for leisure and hospitality industries rose 3.9% in a year while workers without college degrees and those in “low skill” positions got 3.8% pay hikes.

And only two groups got smaller raises than people who stayed with their employer. The highest-paid workers got just 2.8% raises while the oldest workers, age 55 or higher, got only 1.9%.

Bosses are learning that workers know pay jumps if you jump ship, especially for low-wage positions. And in 2021, it’s all about the paycheck.

Quitting is the new labor movement.

Jonathan Lansner is business columnist for the Southern California News Group. He can be reached at [email protected]


Facebook dithered in curbing divisive user content in India


NEW DELHI, India (AP) — Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

Across the world, Facebook has become increasingly important in politics, and India is no different.

Modi has been credited for leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at the Facebook headquarters.

The leaked documents include a trove of internal company reports on hate speech and misinformation in India. In some cases, much of it was intensified by the platform’s own “recommended” feature and algorithms. But they also include company staffers’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.

According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has resulted in “reduced amount of hate speech that people see by half” in 2021.

“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

Back in February 2019 and ahead of a general election when concerns of misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform itself.

The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India — a militant attack in disputed Kashmir had killed over 40 Indian soldiers, bringing the country to near war with rival Pakistan.

In the note, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee whose name is redacted said they were “shocked” by the content flooding the news feed which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in the place of his head. Its “Popular Across Facebook” feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip debunked by one of Facebook’s fact-check partners.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

The memo, circulated with other employees, did not answer that question. But it did expose how the platform’s own algorithms or default settings played a part in spurring such malcontent. The employee noted that there were clear “blind spots,” particularly in “local language content.” They said they hoped these findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical U.S. user.

Even though the research was conducted during three weeks that weren’t an average representation, they acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Again, it was noted that the platform didn’t have enough local language fact-checkers, which meant a lot of content went unverified.

Alongside misinformation, the leaked documents reveal another problem plaguing Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

For Mohammad Abbas, a 54-year-old Muslim preacher in New Delhi, those messages were alarming.

Some video clips and posts purportedly showed Muslims spitting on authorities and hospital staff. They were quickly proven to be fake, but by then India’s communal fault lines, still stressed by deadly riots a month earlier, were again split wide open.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims. Thousands from the community, including Abbas, were confined to institutional quarantine for weeks across the country. Some were even sent to jails, only to be later exonerated by courts.

“People shared fake videos on Facebook claiming Muslims spread the virus. What started as lies on Facebook became truth for millions of people,” Abbas said.

Criticisms of Facebook’s handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi’s party as a “dangerous individual” — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.

The documents reveal that the leadership dithered on the decision, prompting concern among some employees; one wrote that Facebook was only designating non-Hindu extremist organizations as “dangerous.”

The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. At the time, she had also argued that classifying the politician as dangerous would hurt Facebook’s prospects in India.

The author of a December 2020 internal document on the influence of powerful political actors on Facebook policy decisions notes that “Facebook routinely makes exceptions for powerful actors when enforcing content policy.” The document also cites a former Facebook chief security officer saying that outside of the U.S., “local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds or castes,” which “naturally bends decision-making towards the powerful.”

Months later the India official quit Facebook. The company also removed the politician from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

“Several Muslim colleagues have been deeply disturbed/hurt by some of the language used in posts from the Indian policy leadership on their personal FB profile,” an employee wrote.

Another wrote that “barbarism” was being allowed to “flourish on our network.”

It’s a problem that has continued for Facebook, according to the leaked files.

As recently as March this year, the company was internally debating whether it could control the “fear mongering, anti-Muslim narratives” pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group of which Modi is also a member.

In one document titled “Lotus Mahal,” the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls to oust Muslim populations from India” and “Love Jihad,” an unproven conspiracy theory by Hindu hard-liners who accuse Muslim men of using interfaith marriages to coerce Hindu women to change their religion.

The research found that much of this content was “never flagged or actioned” since Facebook lacked “classifiers” and “moderators” in Hindi and Bengali languages. Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali in 2020.

The employees also wrote that Facebook hadn’t yet “put forth a nomination for designation of this group given political sensitivities.”

The company said its designations process includes a review of each case by relevant teams across the company and is agnostic to region, ideology or religion, focusing instead on indicators of violence and hate. It did not, however, reveal whether the Hindu nationalist group had since been designated as “dangerous.”

___

Associated Press writer Sam McNeil in Beijing contributed to this report.

___

See full coverage of the “Facebook Papers” here: https://apnews.com/hub/the-facebook-papers
