A day of Strava firsts

Talk about firsts for me. I ran over a half marathon (13.1 miles), earned my climbing badge, and am 0.1 km short of my 200 km goal.

I am not a young thing. I could not do this when I was in the best shape of my life. In 2016, when I decided I was fat and was going to lose weight to be healthy, walking a few miles a day was an accomplishment.

I have some fantasy accomplishments that I want to do: a 5K, a 10K, a half marathon, a triathlon. You know what? They do not seem as intimidating as they did in 2016. I just need to sign up and participate.

It is funny to me, but the badges on Strava do have meaning. You do not get them for showing up; they are earned by doing.

Cave Fire

Yesterday around 4:30pm, I got this text:

Fire dept enroute for Fire in the Highway 154 area on the Santa Barbara Side.

When I got out of work, I took this picture from SBCC East Campus around 5 pm on November 25.

Cave Fire from SBCC

Around 7 pm, I took this one at 2837 De La Vina St, Santa Barbara, CA.

Cave Fire from De La Vina

I have a friend who lives in the area. I am glad she is more prepared than I am and high-tailed it out. Her house is not in danger, but Santa Barbara is really just one bad wind change away from burning down.

The air was clean this morning, but I can smell the fire now. Time to find the N95 masks that I bought during the last fire.

Now for the editorial. We have a good airport. However, the firefighting aircraft are based in Santa Maria and Los Angeles. I am thankful to Los Angeles for sharing its resources last night. Fires like this get out of hand because of the lack of a quick response. Santa Barbara USED to have firefighting aircraft. There is a part of me that believes we would still have such aircraft if it were important to our representatives.

A note: growing up in Santa Barbara, I remember how fast the response was. Even with resources close by, fires could get out of hand. This fire went from a few acres to over 3,000 in hours. Our fire department was on top of it; a single helicopter arrived around three hours later, and major air assets around 16 hours later. It was scary last night because there were gale warnings (34 to 47 knots), and a wind shift could have swept the fire through town.

I will say this: I appreciate the effort to get the fire under control. It is first rate.

Lifestyle changes to prevent Alzheimer's

Just read "A New Treatment for Alzheimer's? It Starts With Lifestyle" by Linda Marsa.

I feel we should all read it, partly so we can better take care of our loved ones and of ourselves. I don't know if it is correct or not; however, it makes a lot of sense to me.

It talks about bad actors: things that increase risk (probably not an exhaustive list):

  • chronic stress

  • lack of exercise

  • lack of restorative sleep

  • toxins from molds

  • fat-laden fast foods

  • too much sugar

  • being pre-diabetic

Plus

  • sedentary lifestyles

  • poor eating habits

  • Type 2 diabetes

  • insulin resistance

  • skyrocketing obesity (is body positivity saying it is OK to be medically obese??)

Summary of the protocol to combat the problem:

  • Optimizing sleep and getting at least eight hours of shut-eye every night.

  • Fasting at least 12 hours a day; patients usually don’t eat anything after 7 p.m. until the next morning.

  • Frequent yoga and meditation sessions to relieve stress.

  • Aerobic exercise for 30 to 60 minutes, at least five times a week.

  • Brain training exercises for 30 minutes, three times a week.

  • Eating a mostly plant-based diet: broccoli, cauliflower, Brussels sprouts, leafy green vegetables (kale, spinach, lettuce).

  • Cutting out high-mercury fish: tuna, shark and swordfish.

  • Drinking plenty of water.

  • Eliminating gluten and sugars. Cutting out simple carbs (bread, pasta, rice, cookies, cakes, candy, sodas).

This is just a short summary; I hope you read the full article. I plan to add more to this post, but right now other responsibilities are calling. I hope to update it later.

2019 Trending Value stocks as of November 7th

At the beginning of the year, I created a real-money portfolio using O'Shaughnessy's Trending Value. I am a big fan of his book What Works on Wall Street. In August I liquidated the position and replaced it with the current Small Dogs of the Dow. The Small Dogs of the Dow are not as good as just buying the ETF SPY, which is my benchmark, but they are good.

Part of the reason I am doing this post is that I am starting to feel I could have made more money and had fewer headaches by just buying SPY when I started investing again in late 1999 and letting it ride. The market is smarter than I am. This is more for my investment journey, since I feel I need to be more conservative in my choices.

This post compares three choices I was considering on January 1, 2019: Dogs of the Dow, Trending Value, and SPY. I bet very wrong on Trending Value.

The Trending Value stocks were generated using AAII Stock Investor Pro.

In [3]:
# Load libraries
pacman::p_load("quantmod", "tseries", "PerformanceAnalytics")

Trending Value stocks that I purchased. Barnes & Noble (BKS) is commented out since it was acquired by Elliott on June 7th. Removing it is an oversimplification, but it still gives me a picture of performance.

In [4]:
trendvalueSymbols <-
  c(
    "AFL",
    "AGO",
    "ANAT",
    # "BKS", # Purchased
    "CHA",
    "CHL",
    "CLW",
    "CSIQ",
    "CTB",
    "EIG",
    "ELP",
    "GHC",
    "HRB",
    "KEN",
    "KT",
    "NRP",
    "OFG",
    "PDLI",
    "REGI",
    "SBS",
    "SCVL",
    "SIM",
    "SKM",
    "UAL",
    "VIV"
  )

Dogs of the Dow for 2019

In [5]:
DoDSymbols <-
  c("IBM", "XOM", "VZ", "CVX", "PFE", "KO", "JPM", "PG", "CSCO", "MRK")

Benchmark: SPY

In [6]:
SpyderSymbols <- c("SPY")

Get Stock Data

In [7]:
options("getSymbols.warning4.0"=FALSE)
getSymbols(trendvalueSymbols,
           src = 'yahoo',
           from = '2019-01-08',
           to = '2019-11-08')
getSymbols(SpyderSymbols,
           src = 'yahoo',
           from = '2019-01-08',
           to = '2019-11-08')
getSymbols(DoDSymbols,
           src = 'yahoo',
           from = '2019-01-08',
           to = '2019-11-08')
'getSymbols' currently uses auto.assign=TRUE by default, but will
use auto.assign=FALSE in 0.5-0. You will still be able to use
'loadSymbols' to automatically load data. getOption("getSymbols.env")
and getOption("getSymbols.auto.assign") will still be checked for
alternate defaults.

This message is shown once per session and may be disabled by setting 
options("getSymbols.warning4.0"=FALSE). See ?getSymbols for details.

pausing 1 second between requests for more than 5 symbols (repeated for each symbol)
  1. 'AFL'
  2. 'AGO'
  3. 'ANAT'
  4. 'CHA'
  5. 'CHL'
  6. 'CLW'
  7. 'CSIQ'
  8. 'CTB'
  9. 'EIG'
  10. 'ELP'
  11. 'GHC'
  12. 'HRB'
  13. 'KEN'
  14. 'KT'
  15. 'NRP'
  16. 'OFG'
  17. 'PDLI'
  18. 'REGI'
  19. 'SBS'
  20. 'SCVL'
  21. 'SIM'
  22. 'SKM'
  23. 'UAL'
  24. 'VIV'
'SPY'
pausing 1 second between requests for more than 5 symbols (repeated for each symbol)
  1. 'IBM'
  2. 'XOM'
  3. 'VZ'
  4. 'CVX'
  5. 'PFE'
  6. 'KO'
  7. 'JPM'
  8. 'PG'
  9. 'CSCO'
  10. 'MRK'

Put data in lists

In [8]:
pricesTV <- list()
pricesSPY <- list()
pricesDoD <- list()
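# For each ticker, pull the adjusted-close (Ad) column from the xts object that
# getSymbols created above, then cbind those columns into one price matrix per strategy.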

for (i in 1:length(trendvalueSymbols)) {
  pricesTV[[i]] <- Ad(get(trendvalueSymbols[i]))
}
pricesTV <- do.call(cbind, pricesTV)
colnames(pricesTV) <- c(trendvalueSymbols)

for (i in 1:length(SpyderSymbols)) {
  pricesSPY[[i]] <- Ad(get(SpyderSymbols[i]))
}
pricesSPY <- do.call(cbind, pricesSPY)
colnames(pricesSPY) <- c(SpyderSymbols)

for (i in 1:length(DoDSymbols)) {
  pricesDoD[[i]] <- Ad(get(DoDSymbols[i]))
}
pricesDoD <- do.call(cbind, pricesDoD)
colnames(pricesDoD) <- c(DoDSymbols)

Generate Returns and prep for charting

In [9]:
# generate daily returns
returnsTV <- na.omit(ROC(pricesTV, 1, "discrete"))
returnsSPY <- na.omit(ROC(pricesSPY, 1, "discrete"))
returnsDoD <- na.omit(ROC(pricesDoD, 1, "discrete"))

#Prep for charting
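# Note: with no weights argument, Return.portfolio assumes an equal-weight
# portfolio (and, without rebalance_on, I believe it is treated as buy-and-hold).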
portfolio.tv <-
  Return.portfolio(returnsTV, wealth.index = TRUE, verbose = TRUE)
portfolio.spy <-
  Return.portfolio(returnsSPY, wealth.index = TRUE, verbose = TRUE)
portfolio.dod <-
  Return.portfolio(returnsDoD, wealth.index = TRUE, verbose = TRUE)

portfolios.2 <-
  cbind(portfolio.tv$returns,
        portfolio.spy$returns,
        portfolio.dod$returns)
colnames(portfolios.2) <-
  c("Trending Value", "SPYders", "Dogs of Dow")

Chart

In [10]:
chart.CumReturns(
  portfolios.2,
  wealth.index = TRUE,
  legend.loc = "bottomright",
  main = "Growth of $1 investment",
  ylab = "$"
)
[Plot: Growth of $1 investment for the three portfolios]

Returns

In [11]:
table.AnnualizedReturns(portfolios.2)
                          Trending Value  SPYders  Dogs of Dow
Annualized Return                 0.0099   0.2643       0.1653
Annualized Std Dev                0.1276   0.1258       0.1158
Annualized Sharpe (Rf=0%)         0.0774   2.1010       1.4270

I don't have time to go into detail this morning; my day job calls. However, the difference in returns is glaring. I will let you draw your own conclusions.
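
One thing worth noting when reading the table: with Rf = 0, the annualized Sharpe is essentially the annualized return divided by the annualized standard deviation (for SPYders, 0.2643 / 0.1258 ≈ 2.10). If I wanted the gap in plain cumulative terms, PerformanceAnalytics should be able to report it directly. A rough, untested sketch reusing the portfolios.2 object built above:

# Total (cumulative) return over the period for each portfolio
Return.cumulative(portfolios.2)

# Relative performance of Trending Value against the SPY benchmark
chart.RelativePerformance(portfolios.2[, "Trending Value"],
                          portfolios.2[, "SPYders"],
                          main = "Trending Value relative to SPY")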

Jupyter test with R kernel

Just wanted to see if I could do a post with Nikola using an R kernel with Jupyter.

This is the test code from: https://docs.anaconda.com/anaconda/navigator/tutorials/r-lang/

In [1]:
library(dplyr)
iris
Attaching package: 'dplyr'

The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union

Sepal.Length Sepal.Width Petal.Length Petal.Width Species
5.1 3.5 1.4 0.2 setosa
4.9 3.0 1.4 0.2 setosa
4.7 3.2 1.3 0.2 setosa
4.6 3.1 1.5 0.2 setosa
5.0 3.6 1.4 0.2 setosa
5.4 3.9 1.7 0.4 setosa
4.6 3.4 1.4 0.3 setosa
5.0 3.4 1.5 0.2 setosa
4.4 2.9 1.4 0.2 setosa
4.9 3.1 1.5 0.1 setosa
5.4 3.7 1.5 0.2 setosa
4.8 3.4 1.6 0.2 setosa
4.8 3.0 1.4 0.1 setosa
4.3 3.0 1.1 0.1 setosa
5.8 4.0 1.2 0.2 setosa
5.7 4.4 1.5 0.4 setosa
5.4 3.9 1.3 0.4 setosa
5.1 3.5 1.4 0.3 setosa
5.7 3.8 1.7 0.3 setosa
5.1 3.8 1.5 0.3 setosa
5.4 3.4 1.7 0.2 setosa
5.1 3.7 1.5 0.4 setosa
4.6 3.6 1.0 0.2 setosa
5.1 3.3 1.7 0.5 setosa
4.8 3.4 1.9 0.2 setosa
5.0 3.0 1.6 0.2 setosa
5.0 3.4 1.6 0.4 setosa
5.2 3.5 1.5 0.2 setosa
5.2 3.4 1.4 0.2 setosa
4.7 3.2 1.6 0.2 setosa
... ... ... ... ...
6.9 3.2 5.7 2.3 virginica
5.6 2.8 4.9 2.0 virginica
7.7 2.8 6.7 2.0 virginica
6.3 2.7 4.9 1.8 virginica
6.7 3.3 5.7 2.1 virginica
7.2 3.2 6.0 1.8 virginica
6.2 2.8 4.8 1.8 virginica
6.1 3.0 4.9 1.8 virginica
6.4 2.8 5.6 2.1 virginica
7.2 3.0 5.8 1.6 virginica
7.4 2.8 6.1 1.9 virginica
7.9 3.8 6.4 2.0 virginica
6.4 2.8 5.6 2.2 virginica
6.3 2.8 5.1 1.5 virginica
6.1 2.6 5.6 1.4 virginica
7.7 3.0 6.1 2.3 virginica
6.3 3.4 5.6 2.4 virginica
6.4 3.1 5.5 1.8 virginica
6.0 3.0 4.8 1.8 virginica
6.9 3.1 5.4 2.1 virginica
6.7 3.1 5.6 2.4 virginica
6.9 3.1 5.1 2.3 virginica
5.8 2.7 5.1 1.9 virginica
6.8 3.2 5.9 2.3 virginica
6.7 3.3 5.7 2.5 virginica
6.7 3.0 5.2 2.3 virginica
6.3 2.5 5.0 1.9 virginica
6.5 3.0 5.2 2.0 virginica
6.2 3.4 5.4 2.3 virginica
5.9 3.0 5.1 1.8 virginica
In [2]:
library(ggplot2)
ggplot(data=iris, aes(x=Sepal.Length, y=Sepal.Width, color=Species)) + geom_point(size=3)
[Plot: iris Sepal.Length vs Sepal.Width scatter plot, colored by Species]

November 2019 pull-up challenge

Well, I failed my October 2019 pull-up challenge. I am going to try again in November and work up to 10 pull-ups a day by the end of the month.

November schedule:

Date       # of Pullups
11/1/19    1
11/2/19    1
11/3/19    2
11/4/19    2
11/5/19    2
11/6/19    3
11/7/19    3
11/8/19    3
11/9/19    4
11/10/19   4
11/11/19   4
11/13/19   5
11/14/19   5
11/15/19   6
11/16/19   6
11/17/19   6
11/18/19   7
11/19/19   7
11/20/19   7
11/21/19   8
11/22/19   8
11/23/19   8
11/24/19   9
11/25/19   9
11/26/19   9
11/27/19   10
11/28/19   10
11/29/19   10

Weekly Weight Check In: November 11, 2019

Last year at this time, I was in the low 180s. The holiday tradition is gaining weight. This year I am going to fight the pounds. Plus, I want to be at 15% body fat when I do the sprint triathlon in August 2020. My current goal is to be at 15% body fat, or 171 pounds, by August 3, 2020.

On MyFitnessPal, I am trying out check-ins on the community board to help with my weight loss. I might as well put them on my blog.

Weight will be reported using an exponentially smoothed moving average with 10% smoothing (the Hacker's Diet method).
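
For reference, the Hacker's Diet trend is just a recurrence: each day's trend equals yesterday's trend plus 10% of the difference between today's scale reading and yesterday's trend. A small R sketch of the idea (the readings below are made-up example values, not my actual log):

# Exponentially smoothed moving average with 10% smoothing (Hacker's Diet trend)
smooth_trend <- function(weights, alpha = 0.10) {
  trend <- numeric(length(weights))
  trend[1] <- weights[1]                 # seed the trend with the first reading
  for (i in seq_along(weights)[-1]) {
    trend[i] <- trend[i - 1] + alpha * (weights[i] - trend[i - 1])
  }
  trend
}

# Example: a noisy week of scale readings around 200 lb
smooth_trend(c(201.4, 200.8, 201.9, 200.2, 200.9, 200.1, 200.5))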

Today: 200.3 Last Monday: 201.1 Delta: 0.8

That leaves 267 days to go, or 0.76 pounds per week.

[Chart: Weight loss over time, trend falling from 201.1 on 11/4 to 200.3 on 11/11]

Just an IPython test

Just testing an Anaconda Jupyter notebook using examples from

https://blog.quantinsti.com/stock-market-data-analysis-python/

I just want to see how well it works with my Nikola blog.

In [1]:
import pandas_datareader
pandas_datareader.__version__
Out[1]:
'0.8.0'
In [2]:
import pandas as pd
from pandas_datareader import data
# Set the start and end date
start_date = '1990-01-01'
end_date = '2019-02-01'
# Set the ticker
ticker = 'AMZN'
# Get the data
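# (note: this rebinds the name 'data' from the pandas_datareader module to the returned DataFrame)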
data = data.get_data_yahoo(ticker, start_date, end_date)
data.head()
Out[2]:
High Low Open Close Volume Adj Close
Date
1997-05-15 2.500000 1.927083 2.437500 1.958333 72156000.0 1.958333
1997-05-16 1.979167 1.708333 1.968750 1.729167 14700000.0 1.729167
1997-05-19 1.770833 1.625000 1.760417 1.708333 6106800.0 1.708333
1997-05-20 1.750000 1.635417 1.729167 1.635417 5467200.0 1.635417
1997-05-21 1.645833 1.375000 1.635417 1.427083 18853200.0 1.427083
In [3]:
import matplotlib.pyplot as plt
%matplotlib inline
data['Adj Close'].plot()
plt.show()
[Plot: AMZN adjusted close price]
In [4]:
# Plot the adjusted close price
data['Adj Close'].plot(figsize=(10, 7))
# Define the label for the title of the figure
plt.title("Adjusted Close Price of %s" % ticker, fontsize=16)
# Define the labels for x-axis and y-axis
plt.ylabel('Price', fontsize=14)
plt.xlabel('Year', fontsize=14)
# Plot the grid lines
plt.grid(which="major", color='k', linestyle='-.', linewidth=0.5)
# Show the plot
plt.show()
[Plot: Adjusted Close Price of AMZN, with axis labels and grid]

Ride Santa Barbara 100, a climb too far

Well, I actually tried the RIDESB100 this weekend. There were four routes: 39 miles, 100 km, 100 km with Gibraltar, and the century. I did not finish the ride. At around 43 miles, with two miles left to the top of Gibraltar, my legs gave out. The final climb before the top was just a killer, and I did not want to do it. Also, my bike decided it would not shift into the lowest gear.

What I learned:

  • Have my bike checked out before the ride.

  • Have two water bottles on the bike.

  • Have some hydration tablets to put in the water.

  • Have some sort of concentrated calories that are easy to fit in a jersey pocket.

https://www.instagram.com/p/B3z3ZvFBXep/
https://www.instagram.com/p/B3zzhbnB12Q/?utm_source=ig_web_copy_link