Merge branch 'dev' into feature/fix-prices-div-adjust

ValueRaider 2023-04-11 21:12:24 +01:00 committed by GitHub
commit 462783a2a1
12 changed files with 665 additions and 539 deletions

CHANGELOG.rst

@@ -1,6 +1,28 @@
Change Log
===========
0.2.16
------
Fix 'fast_info deprecated' msg appearing at Ticker() init
0.2.15
------
Restore missing Ticker.info keys #1480
0.2.14
------
Fix Ticker.info dict by fetching from API #1461
0.2.13
------
Price bug fixes:
- fetch big-interval with Capital Gains #1455
- merging dividends & splits with prices #1452
0.2.12
------
Disable annoying 'backup decrypt' msg
0.2.11
------
Fix history_metadata accesses for unusual symbols #1411

README.md

@@ -45,7 +45,7 @@ Yahoo! finance API is intended for personal use only.**
## News [2023-01-27]
Since December 2022 Yahoo has been encrypting the web data that `yfinance` scrapes for non-market data. Fortunately the decryption keys are available, although Yahoo moved/changed them several times hence `yfinance` breaking several times. `yfinance` is now better prepared for any future changes by Yahoo.
Why is Yahoo doing this? We don't know. Is it to stop scrapers? Maybe, so we've implemented changes to reduce load on Yahoo. In December we rolled out version 0.2 with optimised scraping. Then in 0.2.6 introduced `Ticker.fast_info`, providing much faster access to some `info` elements wherever possible e.g. price stats and forcing users to switch (sorry but we think necessary). `info` will continue to exist for as long as there are elements without a fast alternative.
Why is Yahoo doing this? We don't know. Is it to stop scrapers? Maybe, so we've implemented changes to reduce load on Yahoo. In December we rolled out version 0.2 with optimised scraping. ~Then in 0.2.6 introduced `Ticker.fast_info`, providing much faster access to some `info` elements wherever possible e.g. price stats and forcing users to switch (sorry but we think necessary). `info` will continue to exist for as long as there are elements without a fast alternative.~ `info` now fixed and much faster than before.
## Quick Start
@@ -58,10 +58,8 @@ import yfinance as yf
msft = yf.Ticker("MSFT")
# get all stock info (slow)
# get all stock info
msft.info
# fast access to subset of stock info (opportunistic)
msft.fast_info
# get historical market data
hist = msft.history(period="1mo")
@@ -154,6 +152,8 @@ msft.option_chain(..., proxy="PROXY_SERVER")
...
```
### Multiple tickers
To initialize multiple `Ticker` objects, use
```python
@@ -167,7 +167,7 @@ tickers.tickers['AAPL'].history(period="1mo")
tickers.tickers['GOOG'].actions
```
### Fetching data for multiple tickers
To download price history into one table:
```python
import yfinance as yf
@@ -180,8 +180,8 @@ data = yf.download("SPY AAPL", start="2017-01-01", end="2017-04-30")
yf.download(tickers = "SPY AAPL", # list of tickers
period = "1y", # time period
interval = "1d", # trading interval
ignore_tz = True, # ignore timezone when aligning data from different exchanges?
prepost = False) # download pre/post market hours data?
prepost = False, # download pre/post market hours data?
repair = True) # repair obvious price errors e.g. 100x?
```
Review the [Wiki](https://github.com/ranaroussi/yfinance/wiki) for more options and detail.
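With `repair=True`, rows that get repaired are flagged in a boolean column. A minimal sketch (column name taken from the tests in this commit; exact output may vary):
```python
import yfinance as yf

df = yf.Ticker("PNL.L").history(period="1y", repair=True)
print(df["Repaired?"].sum(), "rows repaired")  # "Repaired?" column is added by repair
```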
@@ -206,11 +206,12 @@ Combine a `requests_cache` with rate-limiting to avoid triggering Yahoo's rate-l
from requests import Session
from requests_cache import CacheMixin, SQLiteCache
from requests_ratelimiter import LimiterMixin, MemoryQueueBucket
from pyrate_limiter import Duration, RequestRate, Limiter
class CachedLimiterSession(CacheMixin, LimiterMixin, Session):
""" """
pass
session = CachedLimiterSession(
per_second=0.9,
limiter=Limiter(RequestRate(2, Duration.SECOND*5)),  # max 2 requests per 5 seconds
bucket_class=MemoryQueueBucket,
backend=SQLiteCache("yfinance.cache"),
)
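For context, the combined cache + rate-limit session is then handed to `yfinance`; a minimal sketch (not part of this diff), since `yf.Ticker` accepts a `session` argument:
```python
import yfinance as yf

ticker = yf.Ticker("MSFT", session=session)  # all requests now cached & rate-limited
print(ticker.history(period="5d"))
```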
@@ -231,21 +232,7 @@ yfinance?](https://stackoverflow.com/questions/63107801)
- How to download single or multiple tickers into a single
dataframe with single level column names and a ticker column
### Timezone cache store
When fetching price data, all dates are localized to stock exchange timezone.
But timezone retrieval is relatively slow, so yfinance attempts to cache them
in your user's cache folder.
You can direct the cache to use a different location with `set_tz_cache_location()`:
```python
import yfinance as yf
yf.set_tz_cache_location("custom/cache/location")
...
```
---
## `pandas_datareader` override
### `pandas_datareader` override
If your code uses `pandas_datareader` and you want to download data
faster, you can "hijack" `pandas_datareader.data.get_data_yahoo()`
@@ -262,6 +249,18 @@ yf.pdr_override() # <== that's all it takes :-)
data = pdr.get_data_yahoo("SPY", start="2017-01-01", end="2017-04-30")
```
### Timezone cache store
When fetching price data, all dates are localized to stock exchange timezone.
But timezone retrieval is relatively slow, so yfinance attempts to cache them
in your user's cache folder.
You can direct the cache to use a different location with `set_tz_cache_location()`:
```python
import yfinance as yf
yf.set_tz_cache_location("custom/cache/location")
...
```
---
## Installation
@@ -289,11 +288,15 @@ To install `yfinance` using `conda`, see
- [html5lib](https://pypi.org/project/html5lib) \>= 1.1
- [cryptography](https://pypi.org/project/cryptography) \>= 3.3.2
### Optional (if you want to use `pandas_datareader`)
#### Optional (if you want to use `pandas_datareader`)
- [pandas\_datareader](https://github.com/pydata/pandas-datareader)
\>= 0.4.0
## Developers: want to contribute?
`yfinance` relies on the community to investigate bugs and contribute code. Developer guide: https://github.com/ranaroussi/yfinance/discussions/1084
---
### Legal Stuff

meta.yaml

@@ -1,5 +1,5 @@
{% set name = "yfinance" %}
{% set version = "0.2.11" %}
{% set version = "0.2.16" %}
package:
name: "{{ name|lower }}"

tests/prices.py

@@ -230,6 +230,11 @@ class TestPriceHistory(unittest.TestCase):
print("{}-without-events missing these dates: {}".format(tkr, missing_from_df2))
raise
def test_monthlyWithEvents2(self):
# Simply check no exception from internal merge
tkr = "ABBV"
yf.Ticker("ABBV").history(period="max", interval="1mo")
def test_tz_dst_ambiguous(self):
# Reproduce issue #1100
try:
@@ -381,6 +386,16 @@ class TestPriceHistory(unittest.TestCase):
df = dat.history(start=start, interval="1wk")
self.assertTrue((df.index.weekday == 0).all())
def test_aggregate_capital_gains(self):
# Setup
tkr = "FXAIX"
dat = yf.Ticker(tkr, session=self.session)
start = "2017-12-31"
end = "2019-12-31"
interval = "3mo"
df = dat.history(start=start, end=end, interval=interval)
class TestPriceRepair(unittest.TestCase):
session = None
@@ -464,6 +479,9 @@ class TestPriceRepair(unittest.TestCase):
f_1 = ratio == 1
self.assertTrue((f_100 | f_1).all())
self.assertTrue("Repaired?" in df_repaired.columns)
self.assertFalse(df_repaired["Repaired?"].isna().any())
def test_repair_100x_weekly_preSplit(self):
# PNL.L has a stock-split in 2022. Sometimes requesting data before 2022 is not split-adjusted.
@@ -521,6 +539,9 @@ class TestPriceRepair(unittest.TestCase):
f_1 = ratio == 1
self.assertTrue((f_100 | f_1).all())
self.assertTrue("Repaired?" in df_repaired.columns)
self.assertFalse(df_repaired["Repaired?"].isna().any())
def test_repair_100x_daily(self):
tkr = "PNL.L"
dat = yf.Ticker(tkr, session=self.session)
@@ -563,6 +584,9 @@ class TestPriceRepair(unittest.TestCase):
f_1 = ratio == 1
self.assertTrue((f_100 | f_1).all())
self.assertTrue("Repaired?" in df_repaired.columns)
self.assertFalse(df_repaired["Repaired?"].isna().any())
def test_repair_zeroes_daily(self):
tkr = "BBIL.L"
dat = yf.Ticker(tkr, session=self.session)
@@ -590,6 +614,9 @@ class TestPriceRepair(unittest.TestCase):
for c in ["Open", "Low", "High", "Close"]:
self.assertTrue(_np.isclose(repaired_df[c], correct_df[c], rtol=1e-8).all())
self.assertTrue("Repaired?" in repaired_df.columns)
self.assertFalse(repaired_df["Repaired?"].isna().any())
def test_repair_zeroes_hourly(self):
tkr = "INTC"
dat = yf.Ticker(tkr, session=self.session)
@@ -621,6 +648,9 @@ class TestPriceRepair(unittest.TestCase):
print(repaired_df[c] - correct_df[c])
raise
self.assertTrue("Repaired?" in repaired_df.columns)
self.assertFalse(repaired_df["Repaired?"].isna().any())
if __name__ == '__main__':
unittest.main()

yfinance/base.py

@@ -42,7 +42,7 @@ from . import shared
from .scrapers.analysis import Analysis
from .scrapers.fundamentals import Fundamentals
from .scrapers.holders import Holders
from .scrapers.quote import Quote
from .scrapers.quote import Quote, FastInfo
import json as _json
_BASE_URL_ = 'https://query2.finance.yahoo.com'
@@ -51,463 +51,6 @@ _ROOT_URL_ = 'https://finance.yahoo.com'
logger = logging.getLogger(__name__)
class FastInfo:
# Contains a small subset of info[] items that can be fetched faster elsewhere.
# Imitates a dict.
def __init__(self, tickerBaseObject):
self._tkr = tickerBaseObject
self._prices_1y = None
self._prices_1wk_1h_prepost = None
self._prices_1wk_1h_reg = None
self._md = None
self._currency = None
self._quote_type = None
self._exchange = None
self._timezone = None
self._shares = None
self._mcap = None
self._open = None
self._day_high = None
self._day_low = None
self._last_price = None
self._last_volume = None
self._prev_close = None
self._reg_prev_close = None
self._50d_day_average = None
self._200d_day_average = None
self._year_high = None
self._year_low = None
self._year_change = None
self._10d_avg_vol = None
self._3mo_avg_vol = None
# attrs = utils.attributes(self)
# self.keys = attrs.keys()
# utils.attributes is calling each method, bad! Have to hardcode
_properties = ["currency", "quote_type", "exchange", "timezone"]
_properties += ["shares", "market_cap"]
_properties += ["last_price", "previous_close", "open", "day_high", "day_low"]
_properties += ["regular_market_previous_close"]
_properties += ["last_volume"]
_properties += ["fifty_day_average", "two_hundred_day_average", "ten_day_average_volume", "three_month_average_volume"]
_properties += ["year_high", "year_low", "year_change"]
# Because released before fixing key case, need to officially support
# camel-case but also secretly support snake-case
base_keys = [k for k in _properties if not '_' in k]
sc_keys = [k for k in _properties if '_' in k]
self._sc_to_cc_key = {k:utils.snake_case_2_camelCase(k) for k in sc_keys}
self._cc_to_sc_key = {v:k for k,v in self._sc_to_cc_key.items()}
self._public_keys = sorted(base_keys + list(self._sc_to_cc_key.values()))
self._keys = sorted(self._public_keys + sc_keys)
# dict imitation:
def keys(self):
return self._public_keys
def items(self):
return [(k,self[k]) for k in self._public_keys]
def values(self):
return [self[k] for k in self._public_keys]
def get(self, key, default=None):
if key in self.keys():
if key in self._cc_to_sc_key:
key = self._cc_to_sc_key[key]
return self[key]
return default
def __getitem__(self, k):
if not isinstance(k, str):
raise KeyError(f"key must be a string")
if not k in self._keys:
raise KeyError(f"'{k}' not valid key. Examine 'FastInfo.keys()'")
if k in self._cc_to_sc_key:
k = self._cc_to_sc_key[k]
return getattr(self, k)
def __contains__(self, k):
return k in self.keys()
def __iter__(self):
return iter(self.keys())
def __str__(self):
return "lazy-loading dict with keys = " + str(self.keys())
def __repr__(self):
return self.__str__()
def toJSON(self, indent=4):
d = {k:self[k] for k in self.keys()}
return _json.dumps({k:self[k] for k in self.keys()}, indent=indent)
def _get_1y_prices(self, fullDaysOnly=False):
if self._prices_1y is None:
self._prices_1y = self._tkr.history(period="380d", div_adjust=False, debug=False, keepna=True)
self._md = self._tkr.get_history_metadata()
try:
ctp = self._md["currentTradingPeriod"]
self._today_open = pd.to_datetime(ctp["regular"]["start"], unit='s', utc=True).tz_convert(self.timezone)
self._today_close = pd.to_datetime(ctp["regular"]["end"], unit='s', utc=True).tz_convert(self.timezone)
self._today_midnight = self._today_close.ceil("D")
except:
self._today_open = None
self._today_close = None
self._today_midnight = None
pass
if self._prices_1y.empty:
return self._prices_1y
dnow = pd.Timestamp.utcnow().tz_convert(self.timezone).date()
d1 = dnow
d0 = (d1 + _datetime.timedelta(days=1)) - utils._interval_to_timedelta("1y")
if fullDaysOnly and self._exchange_open_now():
# Exclude today
d1 -= utils._interval_to_timedelta("1d")
return self._prices_1y.loc[str(d0):str(d1)]
def _get_1wk_1h_prepost_prices(self):
if self._prices_1wk_1h_prepost is None:
self._prices_1wk_1h_prepost = self._tkr.history(period="1wk", interval="1h", div_adjust=False, prepost=True, debug=False)
return self._prices_1wk_1h_prepost
def _get_1wk_1h_reg_prices(self):
if self._prices_1wk_1h_reg is None:
self._prices_1wk_1h_reg = self._tkr.history(period="1wk", interval="1h", div_adjust=False, prepost=False, debug=False)
return self._prices_1wk_1h_reg
def _get_exchange_metadata(self):
if self._md is not None:
return self._md
self._get_1y_prices()
self._md = self._tkr.get_history_metadata()
return self._md
def _exchange_open_now(self):
t = pd.Timestamp.utcnow()
self._get_exchange_metadata()
# if self._today_open is None and self._today_close is None:
# r = False
# else:
# r = self._today_open <= t and t < self._today_close
# if self._today_midnight is None:
# r = False
# elif self._today_midnight.date() > t.tz_convert(self.timezone).date():
# r = False
# else:
# r = t < self._today_midnight
last_day_cutoff = self._get_1y_prices().index[-1] + _datetime.timedelta(days=1)
last_day_cutoff += _datetime.timedelta(minutes=20)
r = t < last_day_cutoff
# print("_exchange_open_now() returning", r)
return r
@property
def currency(self):
if self._currency is not None:
return self._currency
md = self._get_exchange_metadata()
if "currency" in md:
self._currency = md["currency"]
return self._currency
@property
def quote_type(self):
if self._quote_type is not None:
return self._quote_type
md = self._get_exchange_metadata()
if "instrumentType" in md:
self._quote_type = md["instrumentType"]
return self._quote_type
@property
def exchange(self):
if self._exchange is not None:
return self._exchange
md = self._get_exchange_metadata()
if "exchangeName" in md:
self._exchange = md["exchangeName"]
return self._exchange
@property
def timezone(self):
if self._timezone is not None:
return self._timezone
md = self._get_exchange_metadata()
if "exchangeTimezoneName" in md:
self._timezone = md["exchangeTimezoneName"]
return self._timezone
@property
def shares(self):
if self._shares is not None:
return self._shares
shares = self._tkr.get_shares_full(start=pd.Timestamp.utcnow().date()-pd.Timedelta(days=548))
if shares is None:
# Requesting 18 months failed, so fallback to shares which should include last year
shares = self._tkr.get_shares()
if shares is not None:
if isinstance(shares, pd.DataFrame):
shares = shares[shares.columns[0]]
self._shares = int(shares.iloc[-1])
return self._shares
@property
def last_price(self):
if self._last_price is not None:
return self._last_price
prices = self._get_1y_prices()
if prices.empty:
md = self._get_exchange_metadata()
if "regularMarketPrice" in md:
self._last_price = md["regularMarketPrice"]
else:
self._last_price = float(prices["Close"].iloc[-1])
if _np.isnan(self._last_price):
md = self._get_exchange_metadata()
if "regularMarketPrice" in md:
self._last_price = md["regularMarketPrice"]
return self._last_price
@property
def previous_close(self):
if self._prev_close is not None:
return self._prev_close
prices = self._get_1wk_1h_prepost_prices()
fail = False
if prices.empty:
fail = True
else:
prices = prices[["Close"]].groupby(prices.index.date).last()
if prices.shape[0] < 2:
# Very few symbols have previousClose despite
# no trading data e.g. 'QCSTIX'.
fail = True
else:
self._prev_close = float(prices["Close"].iloc[-2])
if fail:
# Fallback to original info[] if available.
self._tkr.info # trigger fetch
k = "previousClose"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._prev_close = self._tkr._quote._retired_info[k]
return self._prev_close
@property
def regular_market_previous_close(self):
if self._reg_prev_close is not None:
return self._reg_prev_close
prices = self._get_1y_prices()
if prices.shape[0] == 1:
# Tiny % of tickers don't return daily history before last trading day,
# so backup option is hourly history:
prices = self._get_1wk_1h_reg_prices()
prices = prices[["Close"]].groupby(prices.index.date).last()
if prices.shape[0] < 2:
# Very few symbols have regularMarketPreviousClose despite
# no trading data. E.g. 'QCSTIX'.
# So fallback to original info[] if available.
self._tkr.info # trigger fetch
k = "regularMarketPreviousClose"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._reg_prev_close = self._tkr._quote._retired_info[k]
else:
self._reg_prev_close = float(prices["Close"].iloc[-2])
return self._reg_prev_close
@property
def open(self):
if self._open is not None:
return self._open
prices = self._get_1y_prices()
if prices.empty:
self._open = None
else:
self._open = float(prices["Open"].iloc[-1])
if _np.isnan(self._open):
self._open = None
return self._open
@property
def day_high(self):
if self._day_high is not None:
return self._day_high
prices = self._get_1y_prices()
if prices.empty:
self._day_high = None
else:
self._day_high = float(prices["High"].iloc[-1])
if _np.isnan(self._day_high):
self._day_high = None
return self._day_high
@property
def day_low(self):
if self._day_low is not None:
return self._day_low
prices = self._get_1y_prices()
if prices.empty:
self._day_low = None
else:
self._day_low = float(prices["Low"].iloc[-1])
if _np.isnan(self._day_low):
self._day_low = None
return self._day_low
@property
def last_volume(self):
if self._last_volume is not None:
return self._last_volume
prices = self._get_1y_prices()
self._last_volume = None if prices.empty else int(prices["Volume"].iloc[-1])
return self._last_volume
@property
def fifty_day_average(self):
if self._50d_day_average is not None:
return self._50d_day_average
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._50d_day_average = None
else:
n = prices.shape[0]
a = n-50
b = n
if a < 0:
a = 0
self._50d_day_average = float(prices["Close"].iloc[a:b].mean())
return self._50d_day_average
@property
def two_hundred_day_average(self):
if self._200d_day_average is not None:
return self._200d_day_average
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._200d_day_average = None
else:
n = prices.shape[0]
a = n-200
b = n
if a < 0:
a = 0
self._200d_day_average = float(prices["Close"].iloc[a:b].mean())
return self._200d_day_average
@property
def ten_day_average_volume(self):
if self._10d_avg_vol is not None:
return self._10d_avg_vol
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._10d_avg_vol = None
else:
n = prices.shape[0]
a = n-10
b = n
if a < 0:
a = 0
self._10d_avg_vol = int(prices["Volume"].iloc[a:b].mean())
return self._10d_avg_vol
@property
def three_month_average_volume(self):
if self._3mo_avg_vol is not None:
return self._3mo_avg_vol
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._3mo_avg_vol = None
else:
dt1 = prices.index[-1]
dt0 = dt1 - utils._interval_to_timedelta("3mo") + utils._interval_to_timedelta("1d")
self._3mo_avg_vol = int(prices.loc[dt0:dt1, "Volume"].mean())
return self._3mo_avg_vol
@property
def year_high(self):
if self._year_high is not None:
return self._year_high
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
prices = self._get_1y_prices(fullDaysOnly=False)
self._year_high = float(prices["High"].max())
return self._year_high
@property
def year_low(self):
if self._year_low is not None:
return self._year_low
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
prices = self._get_1y_prices(fullDaysOnly=False)
self._year_low = float(prices["Low"].min())
return self._year_low
@property
def year_change(self):
if self._year_change is not None:
return self._year_change
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.shape[0] >= 2:
self._year_change = (prices["Close"].iloc[-1] - prices["Close"].iloc[0]) / prices["Close"].iloc[0]
self._year_change = float(self._year_change)
return self._year_change
@property
def market_cap(self):
if self._mcap is not None:
return self._mcap
try:
shares = self.shares
except Exception as e:
if "Cannot retrieve share count" in str(e):
shares = None
else:
raise
if shares is None:
# Very few symbols have marketCap despite no share count.
# E.g. 'BTC-USD'
# So fallback to original info[] if available.
self._tkr.info
k = "marketCap"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._mcap = self._tkr._quote._retired_info[k]
else:
self._mcap = float(shares * self.last_price)
return self._mcap
class TickerBase:
def __init__(self, ticker, session=None):
self.ticker = ticker.upper()
@@ -538,7 +81,7 @@ class TickerBase:
self._quote = Quote(self._data)
self._fundamentals = Fundamentals(self._data)
self._fast_info = FastInfo(self)
self._fast_info = None
def stats(self, proxy=None):
ticker_url = "{}/{}".format(self._scrape_url, self.ticker)
@@ -563,11 +106,13 @@ class TickerBase:
Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
Intraday data cannot extend beyond the last 60 days
start: str
Download start date string (YYYY-MM-DD) or _datetime.
Download start date string (YYYY-MM-DD) or _datetime, inclusive.
Default is 1900-01-01
E.g. for start="2020-01-01", the first data point will be on "2020-01-01"
end: str
Download end date string (YYYY-MM-DD) or _datetime.
Download end date string (YYYY-MM-DD) or _datetime, exclusive.
Default is now
E.g. for end="2023-01-01", the last data point will be on "2022-12-31"
prepost : bool
Include Pre and Post market data in results?
Default is False
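A quick sketch (not part of the diff) of the inclusive/exclusive bounds documented above:
```python
import yfinance as yf

# start is inclusive, end is exclusive:
df = yf.Ticker("MSFT").history(start="2020-01-02", end="2020-01-10", interval="1d")
# First row: 2020-01-02. Last row: 2020-01-09, or the last trading day before it.
```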
@@ -792,30 +337,28 @@ class TickerBase:
if not expect_capital_gains:
capital_gains = None
if start is not None:
# Note: use pandas Timestamp as datetime.utcfromtimestamp has bugs on windows
# https://github.com/python/cpython/issues/81708
startDt = _pd.Timestamp(start, unit='s')
if dividends is not None:
dividends = dividends[dividends.index>=startDt]
if capital_gains is not None:
capital_gains = capital_gains[capital_gains.index>=startDt]
if splits is not None:
splits = splits[splits.index >= startDt]
if end is not None:
endDt = _pd.Timestamp(end, unit='s')
if dividends is not None:
dividends = dividends[dividends.index<endDt]
if capital_gains is not None:
capital_gains = capital_gains[capital_gains.index<endDt]
if splits is not None:
splits = splits[splits.index < endDt]
if splits is not None:
splits = utils.set_df_tz(splits, interval, tz_exchange)
if dividends is not None:
dividends = utils.set_df_tz(dividends, interval, tz_exchange)
if capital_gains is not None:
capital_gains = utils.set_df_tz(capital_gains, interval, tz_exchange)
if start is not None:
startDt = quotes.index[0].floor('D')
if dividends is not None:
dividends = dividends.loc[startDt:]
if capital_gains is not None:
capital_gains = capital_gains.loc[startDt:]
if splits is not None:
splits = splits.loc[startDt:]
if end is not None:
endDt = _pd.Timestamp(end, unit='s').tz_localize(tz)
if dividends is not None:
dividends = dividends[dividends.index < endDt]
if capital_gains is not None:
capital_gains = capital_gains[capital_gains.index < endDt]
if splits is not None:
splits = splits[splits.index < endDt]
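A toy illustration (hypothetical data, not from this commit) of the trimming above — event series are clipped to the fetched quote range only after timezones are set:
```python
import pandas as pd

dividends = pd.Series([0.5, 0.6],
                      index=pd.to_datetime(["2022-12-15", "2023-02-01"]).tz_localize("America/New_York"))
startDt = pd.Timestamp("2023-01-03", tz="America/New_York").floor('D')
endDt = pd.Timestamp("2023-03-01", tz="America/New_York")
dividends = dividends.loc[startDt:]             # drop events before the first quote date
dividends = dividends[dividends.index < endDt]  # end is exclusive
print(dividends)  # only the 2023-02-01 dividend survives
```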
# Prepare for combine
intraday = params["interval"][-1] in ("m", 'h')
@@ -850,6 +393,8 @@
else:
df["Capital Gains"] = 0.0
df = df[~df.index.duplicated(keep='first')] # must do before repair
if repair==True or repair=="silent":
# Do this before auto/back adjust
df = self._fix_zeroes(df, interval, tz_exchange, prepost, silent=(repair=="silent"))
@@ -882,9 +427,9 @@
else:
df.index.name = "Date"
# duplicates and missing rows cleanup
df = df[~df.index.duplicated(keep='first')]
self._history = df.copy()
# missing rows cleanup
if not actions:
df = df.drop(columns=["Dividends", "Stock Splits", "Capital Gains"], errors='ignore')
if not keepna:
@@ -943,7 +488,7 @@
m -= _datetime.timedelta(days=1) # allow space for 1-day padding
min_dt = _pd.Timestamp.utcnow() - m
min_dt = min_dt.tz_convert(df.index.tz).ceil("D")
logger.debug((f"min_dt={min_dt} interval={interval} sub_interval={sub_interval}"))
logger.debug(f"min_dt={min_dt} interval={interval} sub_interval={sub_interval}")
if min_dt is not None:
f_recent = df.index >= min_dt
f_repair_rows = f_repair_rows & f_recent
@@ -959,6 +504,7 @@
return df
df_v2 = df.copy()
df_v2["Repaired?"] = False
f_good = ~(df[price_cols].isna().any(axis=1))
f_good = f_good & (df[price_cols].to_numpy()!=tag).all(axis=1)
df_good = df[f_good]
@@ -981,7 +527,7 @@
grp_max_size = _datetime.timedelta(days=5) # allow 2 days for buffer below
else:
grp_max_size = _datetime.timedelta(days=30)
logger.debug("grp_max_size =", grp_max_size)
logger.debug(f"grp_max_size = {grp_max_size}")
for i in range(1, len(dts_to_repair)):
ind = indices_to_repair[i]
dt = dts_to_repair[i]
@@ -1048,6 +594,8 @@
if intraday:
fetch_start = fetch_start.date()
fetch_end = fetch_end.date()+td_1d
if min_dt is not None:
fetch_start = max(min_dt.date(), fetch_start)
logger.debug(f"Fetching {sub_interval} prepost={prepost} {fetch_start}->{fetch_end}")
r = "silent" if silent else True
df_fine = self.history(start=fetch_start, end=fetch_end, interval=sub_interval, div_adjust=False, actions=False, prepost=prepost, repair=r, keepna=True)
@@ -1056,7 +604,7 @@
logger.warning(f"Cannot reconstruct {interval} block starting {start_d}, too old, Yahoo is rejecting request for finer-grain data")
continue
# Discard the buffer
df_fine = df_fine.loc[g[0] : g[-1]+itds[sub_interval]-_datetime.timedelta(milliseconds=1)]
df_fine = df_fine.loc[g[0] : g[-1]+itds[sub_interval]-_datetime.timedelta(milliseconds=1)].copy()
if df_fine.empty:
if not silent:
print("YF: WARNING: Cannot reconstruct because Yahoo not returning data in interval")
@@ -1101,14 +649,14 @@
common_index = _np.intersect1d(df_block.index, df_new.index)
if len(common_index) == 0:
# Can't calibrate so don't attempt repair
logger.warning("Can't calibrate {interval} block starting {start_d} so aborting repair")
logger.warning(f"Can't calibrate {interval} block starting {start_d} so aborting repair")
continue
df_new_calib = df_new[df_new.index.isin(common_index)][price_cols].to_numpy()
df_block_calib = df_block[df_block.index.isin(common_index)][price_cols].to_numpy()
calib_filter = (df_block_calib != tag)
if not calib_filter.any():
# Can't calibrate so don't attempt repair
logger.warning("Can't calibrate {interval} block starting {start_d} so aborting repair")
logger.warning(f"Can't calibrate {interval} block starting {start_d} so aborting repair")
continue
# Avoid divide-by-zero warnings:
for j in range(len(price_cols)):
@@ -1184,6 +732,7 @@
df_v2.loc[idx, "Adj Close"] = df_new_row["Adj Close"]
if "Volume" in bad_fields:
df_v2.loc[idx, "Volume"] = df_new_row["Volume"]
df_v2.loc[idx, "Repaired?"] = True
n_fixed += 1
logger.debug("df_v2:")
@@ -1303,6 +852,7 @@
c = data_cols[j]
df2.loc[fj, c] = df.loc[fj, c]
if df2_zeroes is not None:
df2_zeroes["Repaired?"] = False
df2 = _pd.concat([df2, df2_zeroes]).sort_index()
df2.index = _pd.to_datetime(df2.index)
@@ -1384,6 +934,7 @@
logger.info('%s', msg)
if df2_reserve is not None:
df2_reserve["Repaired?"] = False
df3 = _pd.concat([df3, df2_reserve]).sort_index()
# Restore original values where repair failed (i.e. remove tag values)
@@ -1561,6 +1112,8 @@
@property
def fast_info(self):
if self._fast_info is None:
self._fast_info = FastInfo(self)
return self._fast_info
@property

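To make the lazy construction of `fast_info` above concrete, a hypothetical usage sketch (key names inferred from the FastInfo key-mapping code in this commit):
```python
import yfinance as yf

msft = yf.Ticker("MSFT")
fi = msft.fast_info           # FastInfo is now only built on first access
print(fi["lastPrice"])        # public camelCase key
print(fi.last_price)          # snake_case property also works
print(list(fi.keys())[:3])    # dict-like: keys()/items()/values()/get()
```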
yfinance/data.py

@@ -16,6 +16,8 @@ else:
import requests as requests
import re
from bs4 import BeautifulSoup
import random
import time
from frozendict import frozendict
@@ -205,6 +207,11 @@ class TickerData:
proxy = {"https": proxy}
return proxy
def get_raw_json(self, url, user_agent_headers=None, params=None, proxy=None, timeout=30):
response = self.get(url, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout)
response.raise_for_status()
return response.json()
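A minimal sketch of calling this new helper (assuming `TickerData("MSFT")` construction, and the quoteSummary endpoint used elsewhere in this commit):
```python
data = TickerData("MSFT")  # hypothetical construction
j = data.get_raw_json(
    "https://query2.finance.yahoo.com/v10/finance/quoteSummary/MSFT",
    params={"modules": "quoteType"},
)  # raises requests.HTTPError on a bad status, else returns the parsed JSON dict
```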
def _get_decryption_keys_from_yahoo_js(self, soup):
result = None

View File

@@ -49,11 +49,13 @@ def download(tickers, start=None, end=None, actions=False, threads=True, ignore_
Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
Intraday data cannot extend beyond the last 60 days
start: str
Download start date string (YYYY-MM-DD) or _datetime.
Download start date string (YYYY-MM-DD) or _datetime, inclusive.
Default is 1900-01-01
E.g. for start="2020-01-01", the first data point will be on "2020-01-01"
end: str
Download end date string (YYYY-MM-DD) or _datetime.
Download end date string (YYYY-MM-DD) or _datetime, exclusive.
Default is now
E.g. for end="2023-01-01", the last data point will be on "2022-12-31"
group_by : str
Group by 'ticker' or 'column' (default)
prepost : bool

View File

@@ -22,6 +22,7 @@ info_retired_keys = info_retired_keys_price | info_retired_keys_exchange | info_
PRUNE_INFO = True
# PRUNE_INFO = False
_BASIC_URL_ = "https://query2.finance.yahoo.com/v10/finance/quoteSummary"
from collections.abc import MutableMapping
@@ -76,6 +77,462 @@ class InfoDictWrapper(MutableMapping):
return k
class FastInfo:
# Contains a small subset of info[] items that can be fetched faster elsewhere.
# Imitates a dict.
def __init__(self, tickerBaseObject):
utils.print_once("Note: 'info' dict is now fixed & improved, 'fast_info' no longer faster")
self._tkr = tickerBaseObject
self._prices_1y = None
self._prices_1wk_1h_prepost = None
self._prices_1wk_1h_reg = None
self._md = None
self._currency = None
self._quote_type = None
self._exchange = None
self._timezone = None
self._shares = None
self._mcap = None
self._open = None
self._day_high = None
self._day_low = None
self._last_price = None
self._last_volume = None
self._prev_close = None
self._reg_prev_close = None
self._50d_day_average = None
self._200d_day_average = None
self._year_high = None
self._year_low = None
self._year_change = None
self._10d_avg_vol = None
self._3mo_avg_vol = None
# attrs = utils.attributes(self)
# self.keys = attrs.keys()
# utils.attributes is calling each method, bad! Have to hardcode
_properties = ["currency", "quote_type", "exchange", "timezone"]
_properties += ["shares", "market_cap"]
_properties += ["last_price", "previous_close", "open", "day_high", "day_low"]
_properties += ["regular_market_previous_close"]
_properties += ["last_volume"]
_properties += ["fifty_day_average", "two_hundred_day_average", "ten_day_average_volume", "three_month_average_volume"]
_properties += ["year_high", "year_low", "year_change"]
# Because released before fixing key case, need to officially support
# camel-case but also secretly support snake-case
base_keys = [k for k in _properties if not '_' in k]
sc_keys = [k for k in _properties if '_' in k]
self._sc_to_cc_key = {k:utils.snake_case_2_camelCase(k) for k in sc_keys}
self._cc_to_sc_key = {v:k for k,v in self._sc_to_cc_key.items()}
self._public_keys = sorted(base_keys + list(self._sc_to_cc_key.values()))
self._keys = sorted(self._public_keys + sc_keys)
# dict imitation:
def keys(self):
return self._public_keys
def items(self):
return [(k,self[k]) for k in self._public_keys]
def values(self):
return [self[k] for k in self._public_keys]
def get(self, key, default=None):
if key in self.keys():
if key in self._cc_to_sc_key:
key = self._cc_to_sc_key[key]
return self[key]
return default
def __getitem__(self, k):
if not isinstance(k, str):
raise KeyError(f"key must be a string")
if not k in self._keys:
raise KeyError(f"'{k}' not valid key. Examine 'FastInfo.keys()'")
if k in self._cc_to_sc_key:
k = self._cc_to_sc_key[k]
return getattr(self, k)
def __contains__(self, k):
return k in self.keys()
def __iter__(self):
return iter(self.keys())
def __str__(self):
return "lazy-loading dict with keys = " + str(self.keys())
def __repr__(self):
return self.__str__()
def toJSON(self, indent=4):
d = {k:self[k] for k in self.keys()}
return _json.dumps({k:self[k] for k in self.keys()}, indent=indent)
def _get_1y_prices(self, fullDaysOnly=False):
if self._prices_1y is None:
self._prices_1y = self._tkr.history(period="380d", auto_adjust=False, debug=False, keepna=True)
self._md = self._tkr.get_history_metadata()
try:
ctp = self._md["currentTradingPeriod"]
self._today_open = pd.to_datetime(ctp["regular"]["start"], unit='s', utc=True).tz_convert(self.timezone)
self._today_close = pd.to_datetime(ctp["regular"]["end"], unit='s', utc=True).tz_convert(self.timezone)
self._today_midnight = self._today_close.ceil("D")
except:
self._today_open = None
self._today_close = None
self._today_midnight = None
raise
if self._prices_1y.empty:
return self._prices_1y
dnow = pd.Timestamp.utcnow().tz_convert(self.timezone).date()
d1 = dnow
d0 = (d1 + datetime.timedelta(days=1)) - utils._interval_to_timedelta("1y")
if fullDaysOnly and self._exchange_open_now():
# Exclude today
d1 -= utils._interval_to_timedelta("1d")
return self._prices_1y.loc[str(d0):str(d1)]
def _get_1wk_1h_prepost_prices(self):
if self._prices_1wk_1h_prepost is None:
self._prices_1wk_1h_prepost = self._tkr.history(period="1wk", interval="1h", auto_adjust=False, prepost=True, debug=False)
return self._prices_1wk_1h_prepost
def _get_1wk_1h_reg_prices(self):
if self._prices_1wk_1h_reg is None:
self._prices_1wk_1h_reg = self._tkr.history(period="1wk", interval="1h", auto_adjust=False, prepost=False, debug=False)
return self._prices_1wk_1h_reg
def _get_exchange_metadata(self):
if self._md is not None:
return self._md
self._get_1y_prices()
self._md = self._tkr.get_history_metadata()
return self._md
def _exchange_open_now(self):
t = pd.Timestamp.utcnow()
self._get_exchange_metadata()
# if self._today_open is None and self._today_close is None:
# r = False
# else:
# r = self._today_open <= t and t < self._today_close
# if self._today_midnight is None:
# r = False
# elif self._today_midnight.date() > t.tz_convert(self.timezone).date():
# r = False
# else:
# r = t < self._today_midnight
last_day_cutoff = self._get_1y_prices().index[-1] + datetime.timedelta(days=1)
last_day_cutoff += datetime.timedelta(minutes=20)
r = t < last_day_cutoff
# print("_exchange_open_now() returning", r)
return r
@property
def currency(self):
if self._currency is not None:
return self._currency
if self._tkr._history_metadata is None:
self._get_1y_prices()
md = self._tkr.get_history_metadata()
self._currency = md["currency"]
return self._currency
@property
def quote_type(self):
if self._quote_type is not None:
return self._quote_type
if self._tkr._history_metadata is None:
self._get_1y_prices()
md = self._tkr.get_history_metadata()
self._quote_type = md["instrumentType"]
return self._quote_type
@property
def exchange(self):
if self._exchange is not None:
return self._exchange
self._exchange = self._get_exchange_metadata()["exchangeName"]
return self._exchange
@property
def timezone(self):
if self._timezone is not None:
return self._timezone
self._timezone = self._get_exchange_metadata()["exchangeTimezoneName"]
return self._timezone
@property
def shares(self):
if self._shares is not None:
return self._shares
shares = self._tkr.get_shares_full(start=pd.Timestamp.utcnow().date()-pd.Timedelta(days=548))
if shares is None:
# Requesting 18 months failed, so fallback to shares which should include last year
shares = self._tkr.get_shares()
if shares is not None:
if isinstance(shares, pd.DataFrame):
shares = shares[shares.columns[0]]
self._shares = int(shares.iloc[-1])
return self._shares
@property
def last_price(self):
if self._last_price is not None:
return self._last_price
prices = self._get_1y_prices()
if prices.empty:
md = self._get_exchange_metadata()
if "regularMarketPrice" in md:
self._last_price = md["regularMarketPrice"]
else:
self._last_price = float(prices["Close"].iloc[-1])
if _np.isnan(self._last_price):
md = self._get_exchange_metadata()
if "regularMarketPrice" in md:
self._last_price = md["regularMarketPrice"]
return self._last_price
@property
def previous_close(self):
if self._prev_close is not None:
return self._prev_close
prices = self._get_1wk_1h_prepost_prices()
fail = False
if prices.empty:
fail = True
else:
prices = prices[["Close"]].groupby(prices.index.date).last()
if prices.shape[0] < 2:
# Very few symbols have previousClose despite
# no trading data e.g. 'QCSTIX'.
fail = True
else:
self._prev_close = float(prices["Close"].iloc[-2])
if fail:
# Fallback to original info[] if available.
self._tkr.info # trigger fetch
k = "previousClose"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._prev_close = self._tkr._quote._retired_info[k]
return self._prev_close
@property
def regular_market_previous_close(self):
if self._reg_prev_close is not None:
return self._reg_prev_close
prices = self._get_1y_prices()
if prices.shape[0] == 1:
# Tiny % of tickers don't return daily history before last trading day,
# so backup option is hourly history:
prices = self._get_1wk_1h_reg_prices()
prices = prices[["Close"]].groupby(prices.index.date).last()
if prices.shape[0] < 2:
# Very few symbols have regularMarketPreviousClose despite
# no trading data. E.g. 'QCSTIX'.
# So fallback to original info[] if available.
self._tkr.info # trigger fetch
k = "regularMarketPreviousClose"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._reg_prev_close = self._tkr._quote._retired_info[k]
else:
self._reg_prev_close = float(prices["Close"].iloc[-2])
return self._reg_prev_close
@property
def open(self):
if self._open is not None:
return self._open
prices = self._get_1y_prices()
if prices.empty:
self._open = None
else:
self._open = float(prices["Open"].iloc[-1])
if _np.isnan(self._open):
self._open = None
return self._open
@property
def day_high(self):
if self._day_high is not None:
return self._day_high
prices = self._get_1y_prices()
if prices.empty:
self._day_high = None
else:
self._day_high = float(prices["High"].iloc[-1])
if _np.isnan(self._day_high):
self._day_high = None
return self._day_high
@property
def day_low(self):
if self._day_low is not None:
return self._day_low
prices = self._get_1y_prices()
if prices.empty:
self._day_low = None
else:
self._day_low = float(prices["Low"].iloc[-1])
if _np.isnan(self._day_low):
self._day_low = None
return self._day_low
@property
def last_volume(self):
if self._last_volume is not None:
return self._last_volume
prices = self._get_1y_prices()
self._last_volume = None if prices.empty else int(prices["Volume"].iloc[-1])
return self._last_volume
@property
def fifty_day_average(self):
if self._50d_day_average is not None:
return self._50d_day_average
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._50d_day_average = None
else:
n = prices.shape[0]
a = n-50
b = n
if a < 0:
a = 0
self._50d_day_average = float(prices["Close"].iloc[a:b].mean())
return self._50d_day_average
@property
def two_hundred_day_average(self):
if self._200d_day_average is not None:
return self._200d_day_average
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._200d_day_average = None
else:
n = prices.shape[0]
a = n-200
b = n
if a < 0:
a = 0
self._200d_day_average = float(prices["Close"].iloc[a:b].mean())
return self._200d_day_average
@property
def ten_day_average_volume(self):
if self._10d_avg_vol is not None:
return self._10d_avg_vol
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._10d_avg_vol = None
else:
n = prices.shape[0]
a = n-10
b = n
if a < 0:
a = 0
self._10d_avg_vol = int(prices["Volume"].iloc[a:b].mean())
return self._10d_avg_vol
@property
def three_month_average_volume(self):
if self._3mo_avg_vol is not None:
return self._3mo_avg_vol
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
self._3mo_avg_vol = None
else:
dt1 = prices.index[-1]
dt0 = dt1 - utils._interval_to_timedelta("3mo") + utils._interval_to_timedelta("1d")
self._3mo_avg_vol = int(prices.loc[dt0:dt1, "Volume"].mean())
return self._3mo_avg_vol
@property
def year_high(self):
if self._year_high is not None:
return self._year_high
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
prices = self._get_1y_prices(fullDaysOnly=False)
self._year_high = float(prices["High"].max())
return self._year_high
@property
def year_low(self):
if self._year_low is not None:
return self._year_low
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.empty:
prices = self._get_1y_prices(fullDaysOnly=False)
self._year_low = float(prices["Low"].min())
return self._year_low
@property
def year_change(self):
if self._year_change is not None:
return self._year_change
prices = self._get_1y_prices(fullDaysOnly=True)
if prices.shape[0] >= 2:
self._year_change = (prices["Close"].iloc[-1] - prices["Close"].iloc[0]) / prices["Close"].iloc[0]
self._year_change = float(self._year_change)
return self._year_change
@property
def market_cap(self):
if self._mcap is not None:
return self._mcap
try:
shares = self.shares
except Exception as e:
if "Cannot retrieve share count" in str(e):
shares = None
else:
raise
if shares is None:
# Very few symbols have marketCap despite no share count.
# E.g. 'BTC-USD'
# So fallback to original info[] if available.
self._tkr.info
k = "marketCap"
if self._tkr._quote._retired_info is not None and k in self._tkr._quote._retired_info:
self._mcap = self._tkr._quote._retired_info[k]
else:
self._mcap = float(shares * self.last_price)
return self._mcap
class Quote:
@@ -90,13 +547,16 @@ class Quote:
self._calendar = None
self._already_scraped = False
self._already_scraped_complementary = False
self._already_fetched = False
self._already_fetched_complementary = False
@property
def info(self) -> dict:
if self._info is None:
self._scrape(self.proxy)
self._scrape_complementary(self.proxy)
# self._scrape(self.proxy) # decrypt broken
self._fetch(self.proxy)
self._fetch_complementary(self.proxy)
return self._info
@@ -239,12 +699,56 @@
except Exception:
pass
def _scrape_complementary(self, proxy):
if self._already_scraped_complementary:
def _fetch(self, proxy):
if self._already_fetched:
return
self._already_scraped_complementary = True
self._already_fetched = True
modules = ['summaryProfile', 'financialData', 'quoteType',
'defaultKeyStatistics', 'assetProfile', 'summaryDetail']
result = self._data.get_raw_json(
_BASIC_URL_ + f"/{self._data.ticker}", params={"modules": ",".join(modules), "ssl": "true"}, proxy=proxy
)
result["quoteSummary"]["result"][0]["symbol"] = self._data.ticker
query1_info = next(
(info for info in result.get("quoteSummary", {}).get("result", []) if info["symbol"] == self._data.ticker),
None,
)
# Most keys that appear in multiple dicts have same value. Except 'maxAge' because
# Yahoo not consistent with days vs seconds. Fix it here:
for k in query1_info:
if "maxAge" in query1_info[k] and query1_info[k]["maxAge"] == 1:
query1_info[k]["maxAge"] = 86400
query1_info = {
k1: v1
for k, v in query1_info.items()
if isinstance(v, dict)
for k1, v1 in v.items()
if v1
}
# recursively format but only because of 'companyOfficers'
def _format(k, v):
if isinstance(v, dict) and "raw" in v and "fmt" in v:
v2 = v["fmt"] if k in {"regularMarketTime", "postMarketTime"} else v["raw"]
elif isinstance(v, list):
v2 = [_format(None, x) for x in v]
elif isinstance(v, dict):
v2 = {k:_format(k, x) for k, x in v.items()}
elif isinstance(v, str):
v2 = v.replace("\xa0", " ")
else:
v2 = v
return v2
for k, v in query1_info.items():
query1_info[k] = _format(k, v)
self._info = query1_info
self._scrape(proxy)
def _fetch_complementary(self, proxy):
if self._already_fetched_complementary:
return
self._already_fetched_complementary = True
# self._scrape(proxy) # decrypt broken
self._fetch(proxy)
if self._info is None:
return
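A toy walk-through (hypothetical data) of the `maxAge` normalisation and module-flattening in `_fetch()` above:
```python
raw = {  # stand-in for one quoteSummary result
    "summaryDetail": {"maxAge": 1, "previousClose": {"raw": 289.1, "fmt": "289.10"}},
    "quoteType": {"symbol": "MSFT", "shortName": "Microsoft Corporation"},
}
for k in raw:
    if "maxAge" in raw[k] and raw[k]["maxAge"] == 1:
        raw[k]["maxAge"] = 86400  # normalise Yahoo's days-vs-seconds inconsistency
flat = {k1: v1 for k, v in raw.items() if isinstance(v, dict) for k1, v1 in v.items() if v1}
# flat == {"maxAge": 86400, "previousClose": {...}, "symbol": "MSFT", "shortName": ...}
```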


@@ -5,3 +5,4 @@ e9a8ab8e5620b712ebc2fb4f33d5c8b9c80c0d07e8c371911c785cf674789f1747d76a909510158a
3365117c2a368ffa5df7313a4a84988f73926a86358e8eea9497c5ff799ce27d104b68e5f2fbffa6f8f92c1fef41765a7066fa6bcf050810a9c4c7872fd3ebf0
15d8f57919857d5a5358d2082c7ef0f1129cfacd2a6480333dcfb954b7bb67d820abefebfdb0eaa6ef18a1c57f617b67d7e7b0ec040403b889630ae5db5a4dbb
db9630d707a7d0953ac795cd8db1ca9ca6c9d8239197cdfda24b4e0ec9c37eaec4db82dab68b8f606ab7b5b4af3e65dab50606f8cf508269ec927e6ee605fb78
3c895fb5ddcc37d20d3073ed74ee3efad59bcb147c8e80fd279f83701b74b092d503dcd399604c6d8be8f3013429d3c2c76ed5b31b80c9df92d5eab6d3339fce

yfinance/tickers.py

@@ -87,10 +87,4 @@ class Tickers:
return data
def news(self):
collection = {}
for ticker in self.symbols:
collection[ticker] = []
items = Ticker(ticker).news
for item in items:
collection[ticker].append(item)
return collection
return {ticker: [item for item in Ticker(ticker).news] for ticker in self.symbols}

yfinance/utils.py

@@ -497,16 +497,24 @@ def fix_Yahoo_returning_live_separate(quotes, interval, tz_exchange):
# Last two rows are within same interval
idx1 = quotes.index[n - 1]
idx2 = quotes.index[n - 2]
if idx1 == idx2:
# Yahoo returning last interval duplicated, which means
# Yahoo is not returning live data (phew!)
return quotes
if _np.isnan(quotes.loc[idx2, "Open"]):
quotes.loc[idx2, "Open"] = quotes["Open"][n - 1]
# Note: nanmax() & nanmin() ignores NaNs
quotes.loc[idx2, "High"] = _np.nanmax([quotes["High"][n - 1], quotes["High"][n - 2]])
quotes.loc[idx2, "Low"] = _np.nanmin([quotes["Low"][n - 1], quotes["Low"][n - 2]])
# Note: nanmax() & nanmin() ignores NaNs, but still need to check not all are NaN to avoid warnings
if not _np.isnan(quotes["High"][n - 1]):
quotes.loc[idx2, "High"] = _np.nanmax([quotes["High"][n - 1], quotes["High"][n - 2]])
if "Adj High" in quotes.columns:
quotes.loc[idx2, "Adj High"] = _np.nanmax([quotes["Adj High"][n - 1], quotes["Adj High"][n - 2]])
if not _np.isnan(quotes["Low"][n - 1]):
quotes.loc[idx2, "Low"] = _np.nanmin([quotes["Low"][n - 1], quotes["Low"][n - 2]])
if "Adj Low" in quotes.columns:
quotes.loc[idx2, "Adj Low"] = _np.nanmin([quotes["Adj Low"][n - 1], quotes["Adj Low"][n - 2]])
quotes.loc[idx2, "Close"] = quotes["Close"][n - 1]
if "Adj High" in quotes.columns:
quotes.loc[idx2, "Adj High"] = _np.nanmax([quotes["Adj High"][n - 1], quotes["Adj High"][n - 2]])
if "Adj Low" in quotes.columns:
quotes.loc[idx2, "Adj Low"] = _np.nanmin([quotes["Adj Low"][n - 1], quotes["Adj Low"][n - 2]])
if "Adj Close" in quotes.columns:
quotes.loc[idx2, "Adj Close"] = quotes["Adj Close"][n - 1]
quotes.loc[idx2, "Volume"] += quotes["Volume"][n - 1]
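A toy illustration (hypothetical prices) of folding Yahoo's separate live row into the previous interval row, as the code above does:
```python
import numpy as _np
import pandas as _pd

quotes = _pd.DataFrame(
    {"High": [10.5, 10.8], "Low": [9.8, 10.1], "Close": [10.2, 10.7], "Volume": [1000, 50]},
    index=_pd.to_datetime(["2023-04-03", "2023-04-03 14:30"]))
idx2 = quotes.index[-2]
if not _np.isnan(quotes["High"].iloc[-1]):
    quotes.loc[idx2, "High"] = _np.nanmax([quotes["High"].iloc[-1], quotes["High"].iloc[-2]])
if not _np.isnan(quotes["Low"].iloc[-1]):
    quotes.loc[idx2, "Low"] = _np.nanmin([quotes["Low"].iloc[-1], quotes["Low"].iloc[-2]])
quotes.loc[idx2, "Close"] = quotes["Close"].iloc[-1]
quotes.loc[idx2, "Volume"] += quotes["Volume"].iloc[-1]
quotes = quotes.drop(quotes.index[-1])  # live row is now merged into its interval row
```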
@@ -538,7 +546,7 @@ def safe_merge_dfs(df_main, df_sub, interval):
df["_NewIndex"] = new_index
# Duplicates present within periods but can aggregate
if data_col_name == "Dividends":
if data_col_name in ["Dividends", "Capital Gains"]:
# Add
df = df.groupby("_NewIndex").sum()
df.index.name = None
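A toy sketch (hypothetical data) of the aggregation above — multiple dividend/capital-gain events falling inside one price bar are summed:
```python
import pandas as pd

df = pd.DataFrame({"Capital Gains": [0.2, 0.3]},
                  index=pd.to_datetime(["2023-01-05", "2023-01-20"]))
df["_NewIndex"] = pd.to_datetime(["2023-01-01", "2023-01-01"])  # both map to the Jan bar
df = df.groupby("_NewIndex").sum()
df.index.name = None
print(df)  # Capital Gains for 2023-01-01 == 0.5
```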
@@ -713,15 +721,17 @@ def format_history_metadata(md):
# Includes pre- and post-market
pre_df = _pd.DataFrame.from_records(_np.hstack(tps["pre"]))
post_df = _pd.DataFrame.from_records(_np.hstack(tps["post"]))
regular_df = _pd.DataFrame.from_records(_np.hstack(tps["post"]))
regular_df = _pd.DataFrame.from_records(_np.hstack(tps["regular"]))
pre_df = pre_df.rename(columns={"start":"pre_start", "end":"pre_end"}).drop(["timezone", "gmtoffset"], axis=1)
post_df = post_df.rename(columns={"start":"post_start", "end":"post_end"}).drop(["timezone", "gmtoffset"], axis=1)
regular_df = regular_df.drop(["timezone", "gmtoffset"], axis=1)
cols = ["pre_start", "pre_end", "start", "end", "post_start", "post_end"]
df = regular_df.join(pre_df).join(post_df)
for c in ["start", "end", "pre_start", "pre_end", "post_start", "post_end"]:
for c in cols:
df[c] = _pd.to_datetime(df[c], unit='s', utc=True).dt.tz_convert(tz)
df = df[cols]
else:
raise Exception()

yfinance/version.py

@@ -1 +1 @@
version = "0.2.13b1"
version = "0.2.16"