Everyone worries and talks about Lyme disease on the East Coast of the U.S., but it appears that they should be worrying about multiple infections (including Lyme disease) when bitten by a tick. From Science Daily:

Single tick bite can pack double pathogen punch

People who get bitten by a blacklegged tick have a higher-than-expected chance of being exposed to more than one pathogen at the same time.

"We found that ticks are almost twice as likely to be infected with two pathogens -- the bacterium that causes Lyme disease and the protozoan that causes babesiosis -- than we would have expected," said Felicia Keesing, a professor of biology at Bard College, Adjunct Scientist at the Cary Institute, and co-author of the paper. "That means health care providers and the public need to be particularly alert to the possibility of multiple infections coming from the same tick bite."

Almost 30 percent of the ticks were infected with the agent of Lyme disease. One-third of these were also infected with at least one other pathogen. The agents of Lyme disease and babesiosis were found together in 7 percent of ticks.

"Mice and chipmunks are critical reservoirs for these two pathogens, so ticks that have fed on these animals are much more likely to be co-infected," ...

Not only was co-infection with the agents of Lyme disease and babesiosis greater than expected, but rates of triple infection with the agents of Lyme, babesiosis, and anaplasmosis were about twice as likely as expected. "People in tick-infested parts of the United States such as the Northeast, Mid-Atlantic, and Upper Midwest, are vulnerable to being exposed to two or three diseases from a single tick bite," said Keesing. "And, of course, that risk increases when they're bitten by more than one tick."

For those who missed it. An amusing and informative personal story (Julia Scott) about trying to cultivate a healthy skin biome. Well worth reading. Excerpts from the May 22, 2014 NY Times:

My No-Soap, No-Shampoo, Bacteria-Rich Hygiene Experiment

For most of my life, if I’ve thought at all about the bacteria living on my skin, it has been while trying to scrub them away. But recently I spent four weeks rubbing them in. I was Subject 26 in testing a living bacterial skin tonic, developed by AOBiome, a biotech start-up in Cambridge, Mass. The tonic looks, feels and tastes like water, but each spray bottle of AO+ Refreshing Cosmetic Mist contains billions of cultivated Nitrosomonas eutropha, an ammonia-oxidizing bacteria (AOB) that is most commonly found in dirt and untreated water. AOBiome scientists hypothesize that it once lived happily on us too — before we started washing it away with soap and shampoo — acting as a built-in cleanser, deodorant, anti-inflammatory and immune booster by feeding on the ammonia in our sweat and converting it into nitrite and nitric oxide.

 Because the N. eutropha are alive, he said, they would need to be kept cold to remain stable. I would be required to mist my face, scalp and body with bacteria twice a day. I would be swabbed every week at a lab, and the samples would be analyzed to detect changes in my invisible microbial community.

While most microbiome studies have focused on the health implications of what’s found deep in the gut, companies like AOBiome are interested in how we can manipulate the hidden universe of organisms (bacteria, viruses and fungi) teeming throughout our glands, hair follicles and epidermis. They see long-term medical possibilities in the idea of adding skin bacteria instead of vanquishing them with antibacterials — the potential to change how we diagnose and treat serious skin ailments. 

For my part in the AO+ study, I wanted to see what the bacteria could do quickly, and I wanted to cut down on variables, so I decided to sacrifice my own soaps, shampoo and deodorant while participating. I was determined to grow a garden of my own. Some skin bacteria species double every 20 minutes; ammonia-oxidizing bacteria are much slower, doubling only every 10 hours. And now the bacteria were on my skin.

I had warned my friends and co-workers about my experiment, and while there were plenty of jokes — someone left a stick of deodorant on my desk; people started referring to me as “Teen Spirit” — when I pressed them to sniff me after a few soap-free days, no one could detect a difference. Aside from my increasingly greasy hair, the real changes were invisible. By the end of the week, Jamas was happy to see test results that showed the N. eutropha had begun to settle in, finding a friendly niche within my biome.

AOBiome is not the first company to try to leverage emerging discoveries about the skin microbiome into topical products. The skin-care aisle at my drugstore had a moisturizer with a “probiotic complex,” which contains an extract of Lactobacillus, species unknown. There is even a “frozen yogurt” body cleanser whose second ingredient is sodium lauryl sulfate, a potent detergent, so you can remove your healthy bacteria just as fast as you can grow them.

Although a few studies have shown that Lactobacillus may reduce symptoms of eczema when taken orally, it does not live on the skin with any abundance, making it “a curious place to start for a skin probiotic,” said Michael Fischbach, a microbiologist at the University of California, San Francisco. Extracts are not alive, so they won’t be colonizing anything.

It doesn’t help that the F.D.A. has no regulatory definition for “probiotic” and has never approved such a product for therapeutic use. “The skin microbiome is the wild frontier,” Fischbach told me. “We know very little about what goes wrong when things go wrong and whether fixing the bacterial community is going to fix any real problems.”

I asked AOBiome which of my products was the biggest threat to the “good” bacteria on my skin. The answer was equivocal: Sodium lauryl sulfate, the first ingredient in many shampoos, may be the deadliest to N. eutropha, but nearly all common liquid cleansers remove at least some of the bacteria. Antibacterial soaps are most likely the worst culprits, but even soaps made with only vegetable oils or animal fats strip the skin of AOB.

The reduced risk of stroke was associated with only a modest 20 grams of protein per day. From Science Daily:

Diet higher in protein may be linked to lower risk of stroke

People with diets higher in protein, especially from fish, may be less likely to have a stroke than those with diets lower in protein, according to a meta-analysis. The meta-analysis looked at all of the available research on the relationship between protein in the diet and the risk of stroke. Seven studies with a total of 254,489 participants who were followed for an average of 14 years were included in the analysis.

"The amount of protein that led to the reduced risk was moderate -- equal to 20 grams per day," said study author Xinfeng Liu, MD, PhD, of Nanjing University School of Medicine in Nanjing, China. 

Overall, the participants with the highest amount of protein in their diets were 20 percent less likely to develop a stroke than those with the lowest amount of protein in their diets. The results accounted for other factors that could affect the risk of stroke, such as smoking and high cholesterol. For every additional 20 grams per day of protein that people ate, their risk of stroke decreased by 26 percent.

Liu noted that the analysis does not support increased consumption of red meat, which has been associated with increased stroke risk. Two of the studies were conducted in Japan, where people eat less red meat than westerners do and more fish, which has been associated with decreased risk of stroke. 

The reduced risk of stroke was stronger for animal protein than vegetable protein. Protein has the effect of lowering blood pressure, which may play a role in reducing stroke risk, Liu said.

Daily sitting for hours on end is no damn good. From Medical Daily:

Too Much Sitting And Watching TV Increases Your Risk Of Certain Cancers: Why Sitting Is The New Smoking

Or the nice scientific write-up of the same study. Bottom line: to lower the risk of cancer, sit less and move more. From Science Daily:

Sedentary behavior increases risk of certain cancers

Physical inactivity has been linked with diabetes, obesity, and cardiovascular disease, but it can also increase the risk of certain cancers, according to a study published June 16 in the JNCI: Journal of the National Cancer Institute.

To assess the relationship between TV viewing time, recreational sitting time, occupational sitting time, and total sitting time with the risk of various cancers, Daniela Schmid, Ph.D., M.Sc., and Michael F. Leitzmann, M.D., Dr.P.H., of the Department of Epidemiology and Preventive Medicine, University of Regensburg, Germany, conducted a meta-analysis of 43 observational studies, including over 4 million individuals and 68,936 cancer cases.

When the highest levels of sedentary behavior were compared to the lowest, the researchers found a statistically significantly higher risk for three types of cancer -- colon, endometrial, and lung. Moreover, the risk increased with each 2-hour increase in sitting time: 8% for colon cancer, 10% for endometrial cancer, and 6% for lung cancer, although the last was borderline statistically significant. The effect also seemed to be independent of physical activity, suggesting that large amounts of time spent sitting can still be detrimental to those who are otherwise physically active. TV viewing time showed the strongest relationship with colon and endometrial cancer, possibly, the authors write, because TV watching is often associated with drinking sweetened beverages and eating junk foods.

The researchers write, "That sedentariness has a detrimental impact on cancer even among physically active persons implies that limiting the time spent sedentary may play an important role in preventing cancer…."

An article comparing the U.S. versus the European Union's approach to chemicals in products (including in cosmetics, personal care products, and foods), which explains why a number of chemicals are banned in Europe, but allowed in the U.S. From Ensia:

BANNED IN EUROPE, SAFE IN THE U.S.

Atrazine, which the U.S. Environmental Protection Agency says is estimated to be the most heavily used herbicide in the U.S., was banned in Europe in 2003 due to concerns about its ubiquity as a water pollutant. 

The U.S. Food and Drug Administration places no restrictions on the use of formaldehyde or formaldehyde-releasing ingredients in cosmetics or personal care products. Yet formaldehyde-releasing agents are banned from these products in Japan and Sweden while their levels — and that of formaldehyde — are limited elsewhere in Europe. In the U.S., Minnesota has banned in-state sales of children’s personal care products that contain the chemical.

Use of lead-based interior paints was banned in France, Belgium and Austria in 1909. Much of Europe followed suit before 1940. It took the U.S. until 1978 to make this move, even though health experts had, for decades, recognized the potentially acute — even deadly — and irreversible hazards of lead exposure.

These are but a few examples of chemical products allowed to be used in the U.S. in ways other countries have decided present unacceptable risks of harm to the environment or human health. How did this happen? Are American products less safe than others? Are Americans more at risk of exposure to hazardous chemicals than, say, Europeans?

Not surprisingly, the answers are complex and the bottom line, far from clear-cut. One thing that is evident, however, is that “the policy approach in the U.S. and Europe is dramatically different."

A key element of the European Union’s chemicals management and environmental protection policies — and one that clearly distinguishes the EU’s approach from that of the U.S. federal government — is what’s called the precautionary principle. This principle, in the words of the European Commission, “aims at ensuring a higher level of environmental protection through preventative” decision-making. In other words, it says that when there is substantial, credible evidence of danger to human or environmental health, protective action should be taken despite continuing scientific uncertainty.

In contrast, the U.S. federal government’s approach to chemicals management sets a very high bar for the proof of harm that must be demonstrated before regulatory action is taken.

This is true of the U.S. Toxic Substances Control Act, the federal law that regulates chemicals used commercially in the U.S. The European law regulating chemicals in commerce, known as REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals), requires manufacturers to submit a full set of toxicity data to the European Chemical Agency before a chemical can be approved for use. U.S. federal law requires such information to be submitted for new chemicals, but leaves a huge gap in terms of what’s known about the environmental and health effects for chemicals already in use. Chemicals used in cosmetics or as food additives or pesticides are covered by other U.S. laws — but these laws, too, have high burdens for proof of harm and, like TSCA, do not incorporate a precautionary approach.

While FDA approval is required for food additives, the agency relies on studies performed by the companies seeking approval of chemicals they manufacture or want to use in making determinations about food additive safety. Natural Resources Defense Council senior scientist Maricel Maffini and NRDC senior attorney Tom Neltner have examined this system closely. “No other developed country that we know of has a similar system in which companies can decide the safety of chemicals put directly into food,” says Maffini. The two point to a number of food additives allowed in the U.S. that other countries have deemed unsafe.

Reliance on voluntary measures is a hallmark of the U.S. approach to chemical regulation. In many cases, when it comes to eliminating toxic chemicals from U.S. consumer products, manufacturers’ and retailers’ own policies — often driven by consumer demand or by regulations outside the U.S. or at the state and local level — are moving faster than U.S. federal policy. 

“Cosmetics regulations are more robust in the EU than here,” says Environmental Defense Fund health program director Sarah Vogel. U.S. regulators largely rely on industry information, she says. Industry performs copious testing, but current law does not require that cosmetic ingredients be free of certain adverse health effects before they go on the market. (FDA regulations, for example, do not specifically prohibit the use of carcinogens, mutagens or endocrine-disrupting chemicals.)

For the FDA to restrict a product or chemical ingredient from cosmetics or personal care products involves a typically long and drawn-out process. What it does more often is to issue advisories.

At the same time, built into the U.S. chemical regulatory system is a large deference to industry. Central to current U.S. policy are cost-benefit analyses with very high bars for proof of harm rather than a proof of safety for entry onto the market. Voluntary measures have moved many unsafe chemical products off store shelves and out of use, but our requirements for proof of harm and the American historical political aversion to precaution mean we often wait far longer than other countries to act.

Another study finding risks from too low vitamin D levels. From Science Daily:

Lower vitamin D level in blood linked to higher premature death rate

Researchers at the University of California, San Diego School of Medicine have found that persons with lower blood levels of vitamin D were twice as likely to die prematurely as people with higher blood levels of vitamin D.

The finding, published in the June 12 issue of American Journal of Public Health, was based on a systematic review of 32 previous studies that included analyses of vitamin D, blood levels and human mortality rates. The specific variant of vitamin D assessed was 25-hydroxyvitamin D, the primary form found in blood.

This new finding is based on the association of low vitamin D with risk of premature death from all causes, not just bone diseases.

Garland said the blood level of vitamin D associated with about half of the death rate was 30 ng/ml. He noted that two-thirds of the U.S. population has an estimated blood vitamin D level below 30 ng/ml. "This study should give the medical community and public substantial reassurance that vitamin D is safe when used in appropriate doses up to 4,000 International Units (IU) per day," said Heather Hofflich, DO, professor in the UC San Diego School of Medicine's Department of Medicine.

The average age when the blood was drawn in this study was 55 years; the average length of follow-up was nine years. The study included residents of 14 countries, including the United States, and data from 566,583 participants.

Interesting to think of bacteria and biofilms (bacterial communities resistant to treatment) involved in stress related heart attacks. From Science Daily:

Bacteria help explain why stress, fear trigger heart attacks

Scientists believe they have an explanation for the axiom that stress, emotional shock, or overexertion may trigger heart attacks in vulnerable people. Hormones released during these events appear to cause bacterial biofilms on arterial walls to disperse, allowing plaque deposits to rupture into the bloodstream, according to research published in mBio®, the online open-access journal of the American Society for Microbiology.

"Our hypothesis fitted with the observation that heart attack and stroke often occur following an event where elevated levels of catecholamine hormones are released into the blood and tissues, such as occurs during sudden emotional shock or stress, sudden exertion or over-exertion," said David Davies of Binghamton University, Binghamton, New York, an author on the study.

Davies and his colleagues isolated and cultured different species of bacteria from diseased carotid arteries that had been removed from patients with atherosclerosis. Their results showed multiple bacterial species living as biofilms in the walls of every atherosclerotic (plaque-covered) carotid artery tested.

In normal conditions, biofilms are adherent microbial communities that are resistant to antibiotic treatment and clearance by the immune system. However, upon receiving a molecular signal, biofilms undergo dispersion, releasing enzymes to digest the scaffolding that maintains the bacteria within the biofilm. These enzymes have the potential to digest the nearby tissues that prevent the arterial plaque deposit from rupturing into the bloodstream. According to Davies, this could provide a scientific explanation for the long-held belief that heart attacks can be triggered by a stress, a sudden shock, or overexertion.

To test this theory, they added norepinephrine, at a level that would be found in the body following stress or exertion, to biofilms formed on the inner walls of silicone tubing. "At least one species of bacteria -- Pseudomonas aeruginosa -- commonly associated with carotid arteries in our studies, was able to undergo a biofilm dispersion response when exposed to norepinephrine, a hormone responsible for the fight-or-flight response in humans," said Davies. Because the biofilms are closely bound to arterial plaques, the dispersal of a biofilm could cause the sudden release of the surrounding arterial plaque, triggering a heart attack.

To their knowledge, this is the first direct observation of biofilm bacteria within a carotid arterial plaque deposit, says Davies. This research suggests that bacteria should be considered to be part of the overall pathology of atherosclerosis and management of bacteria within an arterial plaque lesion may be as important as managing cholesterol.

Note the red biofilm bacterial colonies within the diseased arterial wall:

Bacteria stained with a fluorescent bacterial DNA probe show up as red biofilm microcolonies within the green tissues of a diseased carotid arterial wall.

An interesting link. The thinking is that the number of moles (cutaneous nevi) are linked to higher levels of sex hormones. From Medical Xpress:

Moles linked to risk for breast cancer

Cutaneous nevi, commonly known as moles, may be a novel predictor of breast cancer, according to two studies published in this week's PLOS Medicine. Jiali Han and colleagues from Indiana University and Harvard University, United States, and Marina Kvaskoff and colleagues from INSERM, France, report that women with a greater number of nevi are more likely to develop breast cancer.

The researchers reached these conclusions by using data from two large prospective cohorts: the Nurses' Health Study in the United States, including 74,523 female nurses followed for 24 years, and the E3N Teachers' Study Cohort in France, including 89,902 women followed for 18 years.

In the Nurses' Health Study, Han and colleagues asked study participants to report the number of nevi >3mm on their left arm at the initial assessment. They observed that women with 15 or more nevi were 35% more likely to be diagnosed with breast cancer than women who reported no nevi, corresponding to an absolute risk of developing breast cancer of 8.48% in women with no nevi and 11.4% for women with 15 or more nevi. In a subgroup of women, they observed that postmenopausal women with six or more nevi had higher blood levels of estrogen and testosterone than women with no nevi, and that the association between nevi and breast cancer risk disappeared after adjustment for hormone levels.

In the E3N Study, including mostly teachers, Kvaskoff and colleagues asked study participants to report whether they had no, a few, many, or very many moles. They observed that women with "very many" nevi had a 13% higher breast cancer risk than women reporting no nevi, although the association was no longer significant after adjusting for known breast cancer risk factors, especially benign breast disease or family history of breast cancer, which were themselves associated with nevi number.

These studies do not suggest that nevi cause breast cancer, but raise the possibility that nevi are affected by levels of sex hormones, which may be involved in the development of breast cancer. The findings do suggest that the number of nevi could be used as a marker of breast cancer risk, but it is unclear whether or how this information would improve risk estimation based on established risk factors. The accuracy of the findings is limited by the use of self-reported data on nevus numbers. Moreover, these findings may not apply to non-white women given that these studies involved mostly white participants.

From Medical Xpress:

Estimated risk of breast cancer increases as red meat intake increases

Higher red meat intake in early adulthood might be associated with an increased risk of breast cancer, and women who eat more legumes—such as peas, beans and lentils—poultry, nuts and fish might be at lower risk in later life, suggests a paper published in The BMJ today.

So far, studies have suggested no significant association between red meat intake and breast cancer. However, most have been based on diet during midlife and later, and many lines of evidence suggest that some exposures, potentially including dietary factors, may have greater effects on the development of breast cancer during early adulthood.

So a team of US researchers investigated the association between dietary protein sources in early adulthood and risk of breast cancer. They analysed data from 88,803 premenopausal women (aged 26 to 45) taking part in the Nurses' Health Study II who completed a questionnaire on diet in 1991. Adolescent food intake was also measured and included foods that were commonly eaten from 1960 to 1980, when these women would have been in high school. 

Medical records identified 2,830 cases of breast cancer during 20 years of follow-up.

This translated to an estimate that higher intake of red meat was associated with a 22% increased risk of breast cancer overall. Each additional serving per day of red meat was associated with a 13% increase in risk of breast cancer (12% in premenopausal and 8% in postmenopausal women).

In contrast, estimates showed a lower risk of breast cancer in postmenopausal women with higher consumption of poultry. Substituting one serving per day of poultry for one serving per day of red meat - in the statistical model - was associated with a 17% lower risk of breast cancer overall and a 24% lower risk of postmenopausal breast cancer.

Furthermore, substituting one serving per day of combined legumes, nuts, poultry, and fish for one serving per day of red meat was associated with a 14% lower risk of breast cancer overall and premenopausal breast cancer.

The authors conclude that higher red meat intake in early adulthood "may be a risk factor for breast cancer, and replacing red meat with a combination of legumes, poultry, nuts and fish may reduce the risk of breast cancer." 

Another benefit to exercising - more microbial diversity. From Medscape:

Exercise Linked to More Diverse Intestinal Microbiome

Professional athletes are big winners when it comes to their gut microflora, suggesting a beneficial effect of exercise on gastrointestinal health, investigators report in an article published online June 9 in Gut.

DNA sequencing of fecal samples from players in an international rugby union team showed considerably greater diversity of gut bacteria than samples from people who are more sedentary.

Having a gut populated with myriad species of bacteria is thought by nutritionists and gastroenterologic researchers to be a sign of good health. Conversely, the guts of obese people have consistently been found to contain fewer species of bacteria, note Siobhan F. Clarke, PhD, from the Teagasc Food Research Centre, Moorepark, Fermoy, Ireland, and colleagues. "Our findings show that a combination of exercise and diet impacts on gut microbial diversity. In particular, the enhanced diversity of the microbiota correlates with exercise and dietary protein consumption in the athlete group," the authors write.

The investigators used 16S ribosomal RNA amplicon sequencing to evaluate stool and blood samples from 40 male elite professional rugby players (mean age, 29 years) and 46 healthy age-matched control participants. 

Relative to control participants with a high body mass index (BMI), athletes and control participants with a low BMI had improved metabolic markers. In addition, although athletes had significantly increased levels of creatine kinase, they also had overall lower levels of inflammatory markers than either of the control groups.

Athletes were also found to have more diverse gut microbiota than controls, with organisms in approximately 22 different phyla, 68 families, and 113 genera. Participants with a low BMI were colonized by organisms in just 11 phyla, 33 families, and 65 genera, and participants with a high BMI had even fewer organisms in only 9 phyla, 33 families, and 61 genera.

The professional rugby players, as the investigators expected, had significantly higher levels of total energy intake than the control participants, with protein accounting for 22% of their total intake compared with 16% for control participants with a low BMI and 15% for control participants with a high BMI. When the authors looked for correlations between health parameters and diet with various microbes or microbial diversity, they found significant positive association between microbial diversity and protein intake, creatine kinase levels, and urea.