Because of this change, values of the affected types need to be zero-initialized with the constant 0 instead of the constant nil. Go 1.10 provides gofix modules to help with that rewrite:

go tool fix -r cftype <pkg>
go tool fix -r jni <pkg>
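
For illustration, a minimal sketch of what the rewrite means in practice (the cfTypeRef alias below is just a stand-in for the real uintptr-backed cgo type such as C.CFTypeRef, since compiling against the actual type requires cgo and the Darwin headers):

type cfTypeRef uintptr // stand-in for the C ref type, now mapped to uintptr

var ref cfTypeRef = 0 // previously written as = nil, which no longer compiles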
The go doc tool now adds functions returning slices of T or *T to the display of type T, similar to the existing behavior for functions returning single T or *T results. For example:

$ go doc mail.Address
package mail // import "net/mail"

type Address struct {
	Name    string
	Address string
}
    Address represents a single mail address.

func ParseAddress(address string) (*Address, error)
func ParseAddressList(list string) ([]*Address, error)
func (a *Address) String() string
$
More functions are now eligible for inlining by default, including functions that do nothing but call another function. This extra inlining makes it additionally important to use runtime.CallersFrames instead of iterating over the result of runtime.Callers directly.

// Old code which no longer works correctly (it will miss inlined call frames).
var pcs [10]uintptr
n := runtime.Callers(1, pcs[:])
for _, pc := range pcs[:n] {
	f := runtime.FuncForPC(pc)
	if f != nil {
		fmt.Println(f.Name())
	}
}

// New code which will work correctly.
var pcs [10]uintptr
n := runtime.Callers(1, pcs[:])
frames := runtime.CallersFrames(pcs[:n])
for {
	frame, more := frames.Next()
	fmt.Println(frame.Function)
	if !more {
		break
	}
}

My bad: Linux Mint 18.1.3 is based on Ubuntu 16.04 (xenial), so use the 16.04 package list, not the 17.10 or 17.04 one:


perret@perret-ThinkPad-E460 ~ $ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   983  100   983    0     0    830      0  0:00:01  0:00:01 --:--:--   830
perret@perret-ThinkPad-E460 ~ $ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
[sudo] password for perret: 
Sorry, try again.
[sudo] password for perret: 
perret@perret-ThinkPad-E460 ~ $ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
perret@perret-ThinkPad-E460 ~ $ sudo apt-get update
Hit:1 http://packages.microsoft.com/repos/vscode stable InRelease              
Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease              
Ign:3 http://mirrors.evowise.com/linuxmint/packages sylvia InRelease           
Hit:4 http://archive.canonical.com/ubuntu xenial InRelease                     
Hit:5 http://mirrors.evowise.com/linuxmint/packages sylvia Release             
Hit:6 http://mirror.clibre.uqam.ca/ubuntu xenial InRelease              
Hit:8 https://repo.skype.com/deb stable InRelease  
Hit:9 https://deb.opera.com/opera-stable stable InRelease
Hit:10 http://mirror.clibre.uqam.ca/ubuntu xenial-updates InRelease
Get:11 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial InRelease [2,845 B]
Hit:12 http://mirror.clibre.uqam.ca/ubuntu xenial-backports InRelease          
Get:13 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 Packages [23.3 kB]
Fetched 26.1 kB in 1s (18.7 kB/s)    
Reading package lists... Done
perret@perret-ThinkPad-E460 ~ $ sudo apt-get install dotnet-sdk-2.1.3
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  aspnetcore-store-2.0.0 aspnetcore-store-2.0.3 dotnet-host
  dotnet-hostfxr-2.0.4 dotnet-runtime-2.0.4 liblttng-ust-ctl2 liblttng-ust0
  liburcu4
The following NEW packages will be installed:
  aspnetcore-store-2.0.0 aspnetcore-store-2.0.3 dotnet-host
  dotnet-hostfxr-2.0.4 dotnet-runtime-2.0.4 dotnet-sdk-2.1.3 liblttng-ust-ctl2
  liblttng-ust0 liburcu4
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 118 MB of archives.
After this operation, 344 MB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
perret@perret-ThinkPad-E460 ~ $ sudo apt-get install dotnet-sdk-2.1.4
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  aspnetcore-store-2.0.0 aspnetcore-store-2.0.3 aspnetcore-store-2.0.5
  dotnet-host dotnet-hostfxr-2.0.5 dotnet-runtime-2.0.5 liblttng-ust-ctl2
  liblttng-ust0 liburcu4
The following NEW packages will be installed:
  aspnetcore-store-2.0.0 aspnetcore-store-2.0.3 aspnetcore-store-2.0.5
  dotnet-host dotnet-hostfxr-2.0.5 dotnet-runtime-2.0.5 dotnet-sdk-2.1.4
  liblttng-ust-ctl2 liblttng-ust0 liburcu4
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 119 MB of archives.
After this operation, 348 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 aspnetcore-store-2.0.0 amd64 2.0.0-1 [18.1 MB]
Get:2 http://mirror.clibre.uqam.ca/ubuntu xenial/universe amd64 liburcu4 amd64 0.9.1-3 [47.3 kB]
Get:3 http://mirror.clibre.uqam.ca/ubuntu xenial/universe amd64 liblttng-ust-ctl2 amd64 2.7.1-1 [72.2 kB]
Get:4 http://mirror.clibre.uqam.ca/ubuntu xenial/universe amd64 liblttng-ust0 amd64 2.7.1-1 [127 kB]
Get:5 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 aspnetcore-store-2.0.3 amd64 2.0.3-1 [5,805 kB]
Get:6 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 aspnetcore-store-2.0.5 amd64 2.0.5-1 [1,162 kB]
Get:7 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 dotnet-host amd64 2.0.5-1 [33.8 kB]
Get:8 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 dotnet-hostfxr-2.0.5 amd64 2.0.5-1 [135 kB]
Get:9 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 dotnet-runtime-2.0.5 amd64 2.0.5-1 [18.6 MB]
Get:10 https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial/main amd64 dotnet-sdk-2.1.4 amd64 2.1.4-1 [74.9 MB]
Fetched 119 MB in 51s (2,313 kB/s)                                                                         
Selecting previously unselected package aspnetcore-store-2.0.0.
(Reading database ... 249396 files and directories currently installed.)
Preparing to unpack .../aspnetcore-store-2.0.0_2.0.0-1_amd64.deb ...
Unpacking aspnetcore-store-2.0.0 (2.0.0-1) ...
Selecting previously unselected package aspnetcore-store-2.0.3.
Preparing to unpack .../aspnetcore-store-2.0.3_2.0.3-1_amd64.deb ...
Unpacking aspnetcore-store-2.0.3 (2.0.3-1) ...
Selecting previously unselected package aspnetcore-store-2.0.5.
Preparing to unpack .../aspnetcore-store-2.0.5_2.0.5-1_amd64.deb ...
Unpacking aspnetcore-store-2.0.5 (2.0.5-1) ...
Selecting previously unselected package dotnet-host.
Preparing to unpack .../dotnet-host_2.0.5-1_amd64.deb ...
Unpacking dotnet-host (2.0.5-1) ...
Selecting previously unselected package dotnet-hostfxr-2.0.5.
Preparing to unpack .../dotnet-hostfxr-2.0.5_2.0.5-1_amd64.deb ...
Unpacking dotnet-hostfxr-2.0.5 (2.0.5-1) ...
Selecting previously unselected package liburcu4:amd64.
Preparing to unpack .../liburcu4_0.9.1-3_amd64.deb ...
Unpacking liburcu4:amd64 (0.9.1-3) ...
Selecting previously unselected package liblttng-ust-ctl2:amd64.
Preparing to unpack .../liblttng-ust-ctl2_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Selecting previously unselected package liblttng-ust0:amd64.
Preparing to unpack .../liblttng-ust0_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust0:amd64 (2.7.1-1) ...
Selecting previously unselected package dotnet-runtime-2.0.5.
Preparing to unpack .../dotnet-runtime-2.0.5_2.0.5-1_amd64.deb ...
Unpacking dotnet-runtime-2.0.5 (2.0.5-1) ...
Selecting previously unselected package dotnet-sdk-2.1.4.
Preparing to unpack .../dotnet-sdk-2.1.4_2.1.4-1_amd64.deb ...
Unpacking dotnet-sdk-2.1.4 (2.1.4-1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Setting up aspnetcore-store-2.0.0 (2.0.0-1) ...
Setting up aspnetcore-store-2.0.3 (2.0.3-1) ...
Setting up aspnetcore-store-2.0.5 (2.0.5-1) ...
Setting up dotnet-host (2.0.5-1) ...
Setting up dotnet-hostfxr-2.0.5 (2.0.5-1) ...
Setting up liburcu4:amd64 (0.9.1-3) ...
Setting up liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Setting up liblttng-ust0:amd64 (2.7.1-1) ...
Setting up dotnet-runtime-2.0.5 (2.0.5-1) ...
Setting up dotnet-sdk-2.1.4 (2.1.4-1) ...
This software may collect information about you and your use of the software, and send that to Microsoft.
Please visit http://aka.ms/dotnet-cli-eula for more information.
Welcome to .NET Core!
---------------------
Learn more about .NET Core @ https://aka.ms/dotnet-docs. Use dotnet --help to see available commands or go to https://aka.ms/dotnet-cli-docs.

.NET Core Tools Telemetry
--------------
The .NET Core Tools include a telemetry feature that collects usage information. It is important that the .NET Team understands how the tools are being used so that we can improve them.

The data collected is anonymous and will be published in an aggregated form for use by both Microsoft and community engineers under the Creative Commons Attribution License.

The .NET Core Tools telemetry feature is enabled by default. You can opt-out of the telemetry feature by setting an environment variable DOTNET_CLI_TELEMETRY_OPTOUT (for example, 'export' on macOS/Linux, 'set' on Windows) to true (for example, 'true', 1). You can read more about .NET Core tools telemetry at https://aka.ms/dotnet-cli-telemetry.

Installation Note
--------------
A command will be run during the install process that will improve project restore speed and enable offline access. It will take up to a minute to complete.
Processing triggers for libc-bin (2.23-0ubuntu10) ...

import difflib
import os
import io
import shutil
import struct
import sys
import torch
import tarfile
import tempfile
import warnings
from contextlib import closing, contextmanager
from ._utils import _import_dotted_name
from ._six import string_classes as _string_classes
from torch._sources import get_source_lines_and_file
from torch.types import Storage
from typing import Any, BinaryIO, cast, Dict, Optional, Type, Tuple, Union, IO
import copyreg
import pickle
import pathlib

DEFAULT_PROTOCOL = 2

LONG_SIZE = struct.Struct('=l').size
INT_SIZE = struct.Struct('=i').size
SHORT_SIZE = struct.Struct('=h').size

MAGIC_NUMBER = 0x1950a86a20f9469cfc6c
PROTOCOL_VERSION = 1001
STORAGE_KEY_SEPARATOR = ','

class SourceChangeWarning(Warning):
    pass


@contextmanager
def mkdtemp():
    path = tempfile.mkdtemp()
    yield path
    shutil.rmtree(path)


_package_registry = []


def _is_zipfile(f) -> bool:
    # This is a stricter implementation than zipfile.is_zipfile().
    # zipfile.is_zipfile() is True if the magic number appears anywhere in the
    # binary. Since we expect the files here to be generated by torch.save or
    # torch.jit.save, it's safe to only check the start bytes and avoid
    # collisions and assume the zip has only 1 file.
    # See bugs.python.org/issue28494.

    # Read the first 4 bytes of the file
    read_bytes = []
    start = f.tell()

    byte = f.read(1)
    while byte != "":
        read_bytes.append(byte)
        if len(read_bytes) == 4:
            break
        byte = f.read(1)
    f.seek(start)

    local_header_magic_number = [b'P', b'K', b'\x03', b'\x04']
    return read_bytes == local_header_magic_number


def register_package(priority, tagger, deserializer):
    queue_elem = (priority, tagger, deserializer)
    _package_registry.append(queue_elem)
    _package_registry.sort()


def check_module_version_greater_or_equal(module, req_version_tuple, error_if_malformed=True):
    '''
    Check if a module's version satisfies requirements

    Usually, a module's version string will be like 'x.y.z', which would be represented
    as a tuple (x, y, z), but sometimes it could be an unexpected format. If the version
    string does not match the given tuple's format up to the length of the tuple, then
    error and exit or emit a warning.

    Args:
        module: the module to check the version of
        req_version_tuple: tuple (usually of ints) representing the required version
        error_if_malformed: whether we should exit if module version string is malformed

    Returns:
        requirement_is_met: bool
    '''
    try:
        version_strs = module.__version__.split('.')
        # Cast module version fields to match the types of the required version
        module_version = tuple(
            type(req_field)(version_strs[idx]) for idx, req_field in enumerate(req_version_tuple)
        )
        requirement_is_met = module_version >= req_version_tuple

    except Exception as e:
        message = (
            "'%s' module version string is malformed '%s' and cannot be compared"
            " with tuple %s"
        ) % (
            module.__name__, module.__version__, str(req_version_tuple)
        )
        if error_if_malformed:
            raise RuntimeError(message) from e
        else:
            warnings.warn(message + ', but continuing assuming that requirement is met')
            requirement_is_met = True

    return requirement_is_met
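
# Illustrative usage sketch (not part of torch.serialization): a stub object
# with a well-formed version string satisfies a (0, 3, 1) requirement.
#
#   class _FakeModule:
#       __name__ = 'fake'
#       __version__ = '0.3.2'
#
#   check_module_version_greater_or_equal(_FakeModule(), (0, 3, 1))  # -> True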


def _cpu_tag(obj):
    if type(obj).__module__ == 'torch':
        return 'cpu'


def _cuda_tag(obj):
    if type(obj).__module__ == 'torch.cuda':
        return 'cuda:' + str(obj.get_device())


def _cpu_deserialize(obj, location):
    if location == 'cpu':
        return obj


def validate_cuda_device(location):
    device = torch.cuda._utils._get_device_index(location, True)

    if not torch.cuda.is_available():
        raise RuntimeError('Attempting to deserialize object on a CUDA '
                           'device but torch.cuda.is_available() is False. '
                           'If you are running on a CPU-only machine, '
                           'please use torch.load with map_location=torch.device(\'cpu\') '
                           'to map your storages to the CPU.')
    device_count = torch.cuda.device_count()
    if device >= device_count:
        raise RuntimeError('Attempting to deserialize object on CUDA device '
                           f'{device} but torch.cuda.device_count() is {device_count}. Please use '
                           'torch.load with map_location to map your storages '
                           'to an existing device.')
    return device


def _cuda_deserialize(obj, location):
    if location.startswith('cuda'):
        device = validate_cuda_device(location)
        if getattr(obj, "_torch_load_uninitialized", False):
            storage_type = getattr(torch.cuda, type(obj).__name__)
            with torch.cuda.device(device):
                return storage_type(obj.size())
        else:
            return obj.cuda(device)


register_package(10, _cpu_tag, _cpu_deserialize)
register_package(20, _cuda_tag, _cuda_deserialize)
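
# Illustrative sketch (not part of torch.serialization): user code can extend
# this registry in the same way for a hypothetical 'my_device' backend; the
# names below are made up.
#
#   def _my_device_tag(obj):
#       return 'my_device' if type(obj).__module__ == 'my_backend' else None
#
#   def _my_device_deserialize(obj, location):
#       if location == 'my_device':
#           return obj  # move/copy the CPU storage to the custom device here
#
#   register_package(30, _my_device_tag, _my_device_deserialize)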


def location_tag(storage: Storage):
    for _, tagger, _ in _package_registry:
        location = tagger(storage)
        if location:
            return location
    raise RuntimeError("don't know how to determine data location of "
                       + torch.typename(storage))


def default_restore_location(storage, location):
    for _, _, fn in _package_registry:
        result = fn(storage, location)
        if result is not None:
            return result
    raise RuntimeError("don't know how to restore data location of "
                       + torch.typename(storage) + " (tagged with "
                       + location + ")")


def normalize_storage_type(storage_type):
    return getattr(torch, storage_type.__name__)


def storage_to_tensor_type(storage):
    storage_type = type(storage)
    module = _import_dotted_name(storage_type.__module__)
    return getattr(module, storage_type.__name__.replace('Storage', 'Tensor'))


def _is_path(name_or_buffer):
    return isinstance(name_or_buffer, str) or \
        isinstance(name_or_buffer, pathlib.Path)


class _opener(object):
    def __init__(self, file_like):
        self.file_like = file_like

    def __enter__(self):
        return self.file_like

    def __exit__(self, *args):
        pass


class _open_file(_opener):
    def __init__(self, name, mode):
        super(_open_file, self).__init__(open(name, mode))

    def __exit__(self, *args):
        self.file_like.close()


class _open_buffer_reader(_opener):
    def __init__(self, buffer):
        super(_open_buffer_reader, self).__init__(buffer)
        _check_seekable(buffer)


class _open_buffer_writer(_opener):
    def __exit__(self, *args):
        self.file_like.flush()


def _open_file_like(name_or_buffer, mode):
    if _is_path(name_or_buffer):
        return _open_file(name_or_buffer, mode)
    else:
        if 'w' in mode:
            return _open_buffer_writer(name_or_buffer)
        elif 'r' in mode:
            return _open_buffer_reader(name_or_buffer)
        else:
            raise RuntimeError(f"Expected 'r' or 'w' in mode but got {mode}")


class _open_zipfile_reader(_opener):
    def __init__(self, name_or_buffer) -> None:
        super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))


class _open_zipfile_writer_file(_opener):
    def __init__(self, name) -> None:
        super(_open_zipfile_writer_file, self).__init__(torch._C.PyTorchFileWriter(str(name)))

    def __exit__(self, *args) -> None:
        self.file_like.write_end_of_file()


class _open_zipfile_writer_buffer(_opener):
    def __init__(self, buffer) -> None:
        self.buffer = buffer
        super(_open_zipfile_writer_buffer, self).__init__(torch._C.PyTorchFileWriter(buffer))

    def __exit__(self, *args) -> None:
        self.file_like.write_end_of_file()
        self.buffer.flush()


def _open_zipfile_writer(name_or_buffer):
    container: Type[_opener]
    if _is_path(name_or_buffer):
        container = _open_zipfile_writer_file
    else:
        container = _open_zipfile_writer_buffer
    return container(name_or_buffer)


def _is_compressed_file(f) -> bool:
    compress_modules = ['gzip']
    try:
        return f.__module__ in compress_modules
    except AttributeError:
        return False


def _should_read_directly(f):
    """
    Checks if f is a file that should be read directly. It should be read
    directly if it is backed by a real file (has a fileno) and is not a
    compressed file (e.g. gzip).
    """
    if _is_compressed_file(f):
        return False
    try:
        return f.fileno() >= 0
    except io.UnsupportedOperation:
        return False
    except AttributeError:
        return False


def _check_seekable(f) -> bool:

    def raise_err_msg(patterns, e):
        for p in patterns:
            if p in str(e):
                msg = (str(e) + ". You can only torch.load from a file that is seekable."
                                + " Please pre-load the data into a buffer like io.BytesIO and"
                                + " try to load from it instead.")
                raise type(e)(msg)
        raise e

    try:
        f.seek(f.tell())
        return True
    except (io.UnsupportedOperation, AttributeError) as e:
        raise_err_msg(["seek", "tell"], e)
    return False

def _check_dill_version(pickle_module) -> None:
    '''Checks if using dill as the pickle module, and if so, checks if it is the correct version.
    If dill version is lower than 0.3.1, a ValueError is raised.

    Args:
        pickle_module: module used for pickling metadata and objects

    '''
    if pickle_module.__name__ == 'dill':
        required_dill_version = (0, 3, 1)
        if not check_module_version_greater_or_equal(pickle_module, required_dill_version, False):
            raise ValueError((
                "'torch' supports dill >= %s, but you have dill %s."
                " Please upgrade dill or switch to 'pickle'"
            ) % (
                '.'.join([str(num) for num in required_dill_version]),
                pickle_module.__version__
            ))

def save(obj, f: Union[str, os.PathLike, BinaryIO, IO[bytes]],
         pickle_module=pickle, pickle_protocol=DEFAULT_PROTOCOL, _use_new_zipfile_serialization=True) -> None:
    # Reference: https://github.com/pytorch/pytorch/issues/54354
    # The first line of this docstring overrides the one Sphinx generates for the
    # documentation. We need it so that Sphinx doesn't leak `pickle`s path from
    # the build environment (e.g. `<module 'pickle' from '/leaked/path').

    """save(obj, f, pickle_module=pickle, pickle_protocol=DEFAULT_PROTOCOL, _use_new_zipfile_serialization=True)

    Saves an object to a disk file.

    See also: :ref:`saving-loading-tensors`

    Args:
        obj: saved object
        f: a file-like object (has to implement write and flush) or a string or
           os.PathLike object containing a file name
        pickle_module: module used for pickling metadata and objects
        pickle_protocol: can be specified to override the default protocol

    .. note::
        A common PyTorch convention is to save tensors using .pt file extension.

    .. note::
        PyTorch preserves storage sharing across serialization. See
        :ref:`preserve-storage-sharing` for more details.

    .. note::
        The 1.6 release of PyTorch switched ``torch.save`` to use a new
        zipfile-based file format. ``torch.load`` still retains the ability to
        load files in the old format. If for any reason you want ``torch.save``
        to use the old format, pass the kwarg ``_use_new_zipfile_serialization=False``.

    Example:
        >>> # Save to file
        >>> x = torch.tensor([0, 1, 2, 3, 4])
        >>> torch.save(x, 'tensor.pt')
        >>> # Save to io.BytesIO buffer
        >>> buffer = io.BytesIO()
        >>> torch.save(x, buffer)
    """
    _check_dill_version(pickle_module)

    with _open_file_like(f, 'wb') as opened_file:
        if _use_new_zipfile_serialization:
            with _open_zipfile_writer(opened_file) as opened_zipfile:
                _save(obj, opened_zipfile, pickle_module, pickle_protocol)
                return
        _legacy_save(obj, opened_file, pickle_module, pickle_protocol)


def _legacy_save(obj, f, pickle_module, pickle_protocol) -> None:
    import torch.nn as nn
    serialized_container_types = {}
    serialized_storages = {}

    def persistent_id(obj: Any) -> Optional[Tuple]:
        # FIXME: the docs say that persistent_id should only return a string
        # but torch store returns tuples. This works only in the binary protocol
        # see
        # https://docs.python.org/2/library/pickle.html#pickling-and-unpickling-external-objects
        # https://github.com/python/cpython/blob/master/Lib/pickle.py#L527-L537
        if isinstance(obj, type) and issubclass(obj, nn.Module):
            if obj in serialized_container_types:
                return None
            serialized_container_types[obj] = True
            source_file = source = None
            try:
                source_lines, _, source_file = get_source_lines_and_file(obj)
                source = ''.join(source_lines)
            except Exception:  # saving the source is optional, so we can ignore any errors
                warnings.warn("Couldn't retrieve source code for container of "
                              "type " + obj.__name__ + ". It won't be checked "
                              "for correctness upon loading.")
            return ('module', obj, source_file, source)

        elif torch.is_storage(obj):
            view_metadata: Optional[Tuple[str, int, int]]
            obj = cast(Storage, obj)
            storage_type = normalize_storage_type(type(obj))
            # Offset is always 0, but we keep it for backwards compatibility
            # with the old serialization format (which supported storage views)
            offset = 0
            obj_key = str(obj._cdata)
            location = location_tag(obj)
            serialized_storages[obj_key] = obj
            is_view = obj._cdata != obj._cdata
            if is_view:
                view_metadata = (str(obj._cdata), offset, obj.size())
            else:
                view_metadata = None

            return ('storage',
                    storage_type,
                    obj_key,
                    location,
                    obj.size(),
                    view_metadata)
        return None

    sys_info = dict(
        protocol_version=PROTOCOL_VERSION,
        little_endian=sys.byteorder == 'little',
        type_sizes=dict(
            short=SHORT_SIZE,
            int=INT_SIZE,
            long=LONG_SIZE,
        ),
    )

    pickle_module.dump(MAGIC_NUMBER, f, protocol=pickle_protocol)
    pickle_module.dump(PROTOCOL_VERSION, f, protocol=pickle_protocol)
    pickle_module.dump(sys_info, f, protocol=pickle_protocol)
    pickler = pickle_module.Pickler(f, protocol=pickle_protocol)
    pickler.persistent_id = persistent_id
    pickler.dump(obj)

    serialized_storage_keys = sorted(serialized_storages.keys())
    pickle_module.dump(serialized_storage_keys, f, protocol=pickle_protocol)
    f.flush()
    for key in serialized_storage_keys:
        serialized_storages[key]._write_file(f, _should_read_directly(f), True)


def _save(obj, zip_file, pickle_module, pickle_protocol):
    serialized_storages = {}
    id_map: Dict[int, str] = {}

    def persistent_id(obj):
        # FIXME: the docs say that persistent_id should only return a string
        # but torch store returns tuples. This works only in the binary protocol
        # see
        # https://docs.python.org/2/library/pickle.html#pickling-and-unpickling-external-objects
        # https://github.com/python/cpython/blob/master/Lib/pickle.py#L527-L537
        if torch.is_storage(obj):
            storage_type = normalize_storage_type(type(obj))
            obj_key = id_map.setdefault(obj._cdata, str(len(id_map)))
            location = location_tag(obj)
            serialized_storages[obj_key] = obj

            return ('storage',
                    storage_type,
                    obj_key,
                    location,
                    obj.size())
        return None

    # Write the pickle data for `obj`
    data_buf = io.BytesIO()
    pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)
    pickler.persistent_id = persistent_id
    pickler.dump(obj)
    data_value = data_buf.getvalue()
    zip_file.write_record('data.pkl', data_value, len(data_value))

    # Write each storage to a record named data/<key> in the zip archive
    for key in sorted(serialized_storages.keys()):
        name = f'data/{key}'
        storage = serialized_storages[key]
        # given that we copy things around anyway, we might use storage.cpu()
        # this means that to get tensors serialized, you need to implement
        # .cpu() on the underlying Storage
        if storage.device.type != 'cpu':
            storage = storage.cpu()
        # Now that it is on the CPU we can directly copy it into the zip file
        num_bytes = storage.size() * storage.element_size()
        zip_file.write_record(name, storage.data_ptr(), num_bytes)


def load(f, map_location=None, pickle_module=pickle, **pickle_load_args):
    # Reference: https://github.com/pytorch/pytorch/issues/54354
    # The first line of this docstring overrides the one Sphinx generates for the
    # documentation. We need it so that Sphinx doesn't leak `pickle`s path from
    # the build environment (e.g. `<module 'pickle' from '/leaked/path').

    """load(f, map_location=None, pickle_module=pickle, **pickle_load_args)

    Loads an object saved with :func:`torch.save` from a file.

    :func:`torch.load` uses Python's unpickling facilities but treats storages,
    which underlie tensors, specially. They are first deserialized on the
    CPU and are then moved to the device they were saved from. If this fails
    (e.g. because the run time system doesn't have certain devices), an exception
    is raised. However, storages can be dynamically remapped to an alternative
    set of devices using the :attr:`map_location` argument.

    If :attr:`map_location` is a callable, it will be called once for each serialized
    storage with two arguments: storage and location. The storage argument
    will be the initial deserialization of the storage, residing on the CPU.
    Each serialized storage has a location tag associated with it which
    identifies the device it was saved from, and this tag is the second
    argument passed to :attr:`map_location`. The builtin location tags are ``'cpu'``
    for CPU tensors and ``'cuda:device_id'`` (e.g. ``'cuda:2'``) for CUDA tensors.
    :attr:`map_location` should return either ``None`` or a storage. If
    :attr:`map_location` returns a storage, it will be used as the final deserialized
    object, already moved to the right device. Otherwise, :func:`torch.load` will
    fall back to the default behavior, as if :attr:`map_location` wasn't specified.

    If :attr:`map_location` is a :class:`torch.device` object or a string containing
    a device tag, it indicates the location where all tensors should be loaded.

    Otherwise, if :attr:`map_location` is a dict, it will be used to remap location tags
    appearing in the file (keys), to ones that specify where to put the
    storages (values).

    User extensions can register their own location tags and tagging and
    deserialization methods using :func:`torch.serialization.register_package`.

    Args:
        f: a file-like object (has to implement :meth:`read`, :meth:`readline`, :meth:`tell`, and :meth:`seek`),
            or a string or os.PathLike object containing a file name
        map_location: a function, :class:`torch.device`, string or a dict specifying how to remap storage
            locations
        pickle_module: module used for unpickling metadata and objects (has to
            match the :attr:`pickle_module` used to serialize file)
        pickle_load_args: (Python 3 only) optional keyword arguments passed over to
            :func:`pickle_module.load` and :func:`pickle_module.Unpickler`, e.g.,
            :attr:`errors=...`.

    .. warning::
        :func:`torch.load()` uses ``pickle`` module implicitly, which is known to be insecure.
        It is possible to construct malicious pickle data which will execute arbitrary code
        during unpickling. Never load data that could have come from an untrusted
        source, or that could have been tampered with. **Only load data you trust**.

    .. note::
        When you call :func:`torch.load()` on a file which contains GPU tensors, those tensors
        will be loaded to GPU by default. You can call ``torch.load(.., map_location='cpu')``
        and then :meth:`load_state_dict` to avoid GPU RAM surge when loading a model checkpoint.

    .. note::
        By default, we decode byte strings as ``utf-8``.  This is to avoid a common error
        case ``UnicodeDecodeError: 'ascii' codec can't decode byte 0x...``
        when loading files saved by Python 2 in Python 3.  If this default
        is incorrect, you may use an extra :attr:`encoding` keyword argument to specify how
        these objects should be loaded, e.g., :attr:`encoding='latin1'` decodes them
        to strings using ``latin1`` encoding, and :attr:`encoding='bytes'` keeps them
        as byte arrays which can be decoded later with ``byte_array.decode(...)``.

    Example:
        >>> torch.load('tensors.pt')
        # Load all tensors onto the CPU
        >>> torch.load('tensors.pt', map_location=torch.device('cpu'))
        # Load all tensors onto the CPU, using a function
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
        # Load all tensors onto GPU 1
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
        # Map tensors from GPU 1 to GPU 0
        >>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
        # Load tensor from io.BytesIO object
        >>> with open('tensor.pt', 'rb') as f:
        ...     buffer = io.BytesIO(f.read())
        >>> torch.load(buffer)
        # Load a module with 'ascii' encoding for unpickling
        >>> torch.load('module.pt', encoding='ascii')
    """
    _check_dill_version(pickle_module)

    if 'encoding' not in pickle_load_args.keys():
        pickle_load_args['encoding'] = 'utf-8'

    with _open_file_like(f, 'rb') as opened_file:
        if _is_zipfile(opened_file):
            # The zipfile reader is going to advance the current file position.
            # If we want to actually tail call to torch.jit.load, we need to
            # reset back to the original position.
            orig_position = opened_file.tell()
            with _open_zipfile_reader(opened_file) as opened_zipfile:
                if _is_torchscript_zip(opened_zipfile):
                    warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
                                  " dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to"
                                  " silence this warning)", UserWarning)
                    opened_file.seek(orig_position)
                    return torch.jit.load(opened_file)
                return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)


# Register pickling support for layout instances such as
# torch.sparse_coo, etc
def _get_layout(name):
    """Get layout extension object from its string representation.
    """
    cache = _get_layout.cache   # type: ignore[attr-defined]
    if not cache:
        for v in torch.__dict__.values():
            if isinstance(v, torch.layout):
                cache[str(v)] = v
    return cache[name]

# There is not yet a good way to type-annotate function attributes: https://github.com/python/mypy/issues/2087
_get_layout.cache = {}   # type: ignore[attr-defined]
copyreg.pickle(torch.layout, lambda obj: (_get_layout, (str(obj),)))
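
# Illustrative check (not part of the module): with the copyreg registration
# above, layout singletons should round-trip through pickle, e.g.
#
#   import pickle
#   assert pickle.loads(pickle.dumps(torch.sparse_coo)) is torch.sparse_coo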


def _legacy_load(f, map_location, pickle_module, **pickle_load_args):
    deserialized_objects: Dict[int, Any] = {}

    restore_location = _get_restore_location(map_location)

    def _check_container_source(container_type, source_file, original_source):
        try:
            current_source = ''.join(get_source_lines_and_file(container_type)[0])
        except Exception:  # saving the source is optional, so we can ignore any errors
            warnings.warn("Couldn't retrieve source code for container of "
                          "type " + container_type.__name__ + ". It won't be checked "
                          "for correctness upon loading.")
            return
        if original_source != current_source:
            if container_type.dump_patches:
                file_name = container_type.__name__ + '.patch'
                diff = difflib.unified_diff(current_source.split('\n'),
                                            original_source.split('\n'),
                                            source_file,
                                            source_file, lineterm="")
                lines = '\n'.join(diff)
                try:
                    with open(file_name, 'a+') as f:
                        file_size = f.seek(0, 2)
                        f.seek(0)
                        if file_size == 0:
                            f.write(lines)
                        elif file_size != len(lines) or f.read() != lines:
                            raise IOError
                    msg = ("Saved a reverse patch to " + file_name + ". "
                           "Run `patch -p0 < " + file_name + "` to revert your "
                           "changes.")
                except IOError:
                    msg = ("Tried to save a patch, but couldn't create a "
                           "writable file " + file_name + ". Make sure it "
                           "doesn't exist and your working directory is "
                           "writable.")
            else:
                msg = ("you can retrieve the original source code by "
                       "accessing the object's source attribute or set "
                       "`torch.nn.Module.dump_patches = True` and use the "
                       "patch tool to revert the changes.")
            msg = f"source code of class '{torch.typename(container_type)}' has changed. {msg}"
            warnings.warn(msg, SourceChangeWarning)

    def legacy_load(f):
        deserialized_objects: Dict[int, Any] = {}

        def persistent_load(saved_id):
            if isinstance(saved_id, tuple):
                # Ignore containers that don't have any sources saved
                if all(saved_id[1:]):
                    _check_container_source(*saved_id)
                return saved_id[0]
            return deserialized_objects[int(saved_id)]

        with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
                mkdtemp() as tmpdir:

            tar.extract('storages', path=tmpdir)
            with open(os.path.join(tmpdir, 'storages'), 'rb', 0) as f:
                num_storages = pickle_module.load(f, **pickle_load_args)
                for i in range(num_storages):
                    args = pickle_module.load(f, **pickle_load_args)
                    key, location, storage_type = args
                    obj = storage_type._new_with_file(f)
                    obj = restore_location(obj, location)
                    deserialized_objects[key] = obj

                storage_views = pickle_module.load(f, **pickle_load_args)
                for target_cdata, root_cdata, offset, size in storage_views:
                    root = deserialized_objects[root_cdata]
                    deserialized_objects[target_cdata] = root[offset:offset + size]

            tar.extract('tensors', path=tmpdir)
            with open(os.path.join(tmpdir, 'tensors'), 'rb', 0) as f:
                num_tensors = pickle_module.load(f, **pickle_load_args)
                for _ in range(num_tensors):
                    args = pickle_module.load(f, **pickle_load_args)
                    key, storage_id, original_tensor_type = args
                    storage = deserialized_objects[storage_id]
                    tensor_type = storage_to_tensor_type(storage)
                    ndim, = struct.unpack('<i', f.read(4))
                    # skip next 4 bytes; legacy encoding treated ndim as 8 bytes
                    f.read(4)
                    size = struct.unpack(f'<{ndim}q', f.read(8 * ndim))
                    stride = struct.unpack(f'<{ndim}q', f.read(8 * ndim))
                    storage_offset, = struct.unpack('<q', f.read(8))
                    tensor = tensor_type().set_(storage, storage_offset, size, stride)
                    deserialized_objects[key] = tensor

            pickle_file = tar.extractfile('pickle')
            unpickler = pickle_module.Unpickler(pickle_file, **pickle_load_args)
            unpickler.persistent_load = persistent_load
            result = unpickler.load()
            return result

    deserialized_objects = {}

    def persistent_load(saved_id):
        assert isinstance(saved_id, tuple)
        typename = _maybe_decode_ascii(saved_id[0])
        data = saved_id[1:]

        if typename == 'module':
            # Ignore containers that don't have any sources saved
            if all(data[1:]):
                _check_container_source(*data)
            return data[0]
        elif typename == 'storage':
            data_type, root_key, location, size, view_metadata = data
            location = _maybe_decode_ascii(location)
            if root_key not in deserialized_objects:
                obj = data_type(size)
                obj._torch_load_uninitialized = True
                deserialized_objects[root_key] = restore_location(obj, location)
            storage = deserialized_objects[root_key]
            if view_metadata is not None:
                view_key, offset, view_size = view_metadata
                if view_key not in deserialized_objects:
                    deserialized_objects[view_key] = storage[offset:offset + view_size]
                return deserialized_objects[view_key]
            else:
                return storage
        else:
            raise RuntimeError("Unknown saved id type: %s" % saved_id[0])

    _check_seekable(f)
    f_should_read_directly = _should_read_directly(f)

    if f_should_read_directly and f.tell() == 0:
        # legacy_load requires that f has fileno()
        # only if offset is zero we can attempt the legacy tar file loader
        try:
            return legacy_load(f)
        except tarfile.TarError:
            if _is_zipfile(f):
                # .zip is used for torch.jit.save and will throw an un-pickling error here
                raise RuntimeError(
                    f"{f.name} is a zip archive (did you mean to use torch.jit.load()?)") from None
            # if not a tarfile, reset file offset and proceed
            f.seek(0)

    if not hasattr(f, 'readinto') and (3, 8, 0) <= sys.version_info < (3, 8, 2):
        raise RuntimeError(
            "torch.load does not work with file-like objects that do not implement readinto on Python 3.8.0 and 3.8.1. "
            f"Received object of type \"{type(f)}\". Please update to Python 3.8.2 or newer to restore this "
            "functionality.")

    magic_number = pickle_module.load(f, **pickle_load_args)
    if magic_number != MAGIC_NUMBER:
        raise RuntimeError("Invalid magic number; corrupt file?")
    protocol_version = pickle_module.load(f, **pickle_load_args)
    if protocol_version != PROTOCOL_VERSION:
        raise RuntimeError("Invalid protocol version: %s" % protocol_version)

    _sys_info = pickle_module.load(f, **pickle_load_args)
    unpickler = pickle_module.Unpickler(f, **pickle_load_args)
    unpickler.persistent_load = persistent_load
    result = unpickler.load()

    deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)

    offset = f.tell() if f_should_read_directly else None
    for key in deserialized_storage_keys:
        assert key in deserialized_objects
        deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
        if offset is not None:
            offset = f.tell()

    torch._utils._validate_loaded_sparse_tensors()

    return result


def _maybe_decode_ascii(bytes_str: Union[bytes, str]) -> str:
    # When using encoding='bytes' in Py3, some **internal** keys stored as
    # strings in Py2 are loaded as bytes. This function decodes them with
    # ascii encoding, one that Py3 uses by default.
    #
    # NOTE: This should only be used on internal keys (e.g., `typename` and
    #       `location` in `persistent_load` below)!
    if isinstance(bytes_str, bytes):
        return bytes_str.decode('ascii')
    return bytes_str


def _get_restore_location(map_location):
    if map_location is None:
        restore_location = default_restore_location
    elif isinstance(map_location, dict):
        def restore_location(storage, location):
            location = map_location.get(location, location)
            return default_restore_location(storage, location)
    elif isinstance(map_location, _string_classes):
        def restore_location(storage, location):
            return default_restore_location(storage, map_location)
    elif isinstance(map_location, torch.device):
        def restore_location(storage, location):
            return default_restore_location(storage, str(map_location))
    else:
        def restore_location(storage, location):
            result = map_location(storage, location)
            if result is None:
                result = default_restore_location(storage, location)
            return result
    return restore_location

def _load(zip_file, map_location, pickle_module, pickle_file='data.pkl', **pickle_load_args):
    restore_location = _get_restore_location(map_location)

    loaded_storages = {}

    def load_tensor(data_type, size, key, location):
        name = f'data/{key}'
        dtype = data_type(0).dtype

        storage = zip_file.get_storage_from_record(name, size, dtype).storage()
        loaded_storages[key] = restore_location(storage, location)

    def persistent_load(saved_id):
        assert isinstance(saved_id, tuple)
        typename = _maybe_decode_ascii(saved_id[0])
        data = saved_id[1:]

        assert typename == 'storage', \
            f"Unknown typename for persistent_load, expected 'storage' but got '{typename}'"
        data_type, key, location, size = data
        if key not in loaded_storages:
            load_tensor(data_type, size, key, _maybe_decode_ascii(location))
        storage = loaded_storages[key]
        return storage

    load_module_mapping: Dict[str, str] = {
        # See https://github.com/pytorch/pytorch/pull/51633
        'torch.tensor': 'torch._tensor'
    }

    # Need to subclass Unpickler instead of directly monkey-patching the find_class method
    # because it's marked readonly in pickle.
    # The type: ignore is because mypy can't statically determine the type of this class.
    class UnpicklerWrapper(pickle_module.Unpickler):  # type: ignore[name-defined]
        # from https://stackoverflow.com/questions/13398462/unpickling-python-objects-with-a-changed-module-path/13405732
        # Lets us override the imports that pickle uses when unpickling an object.
        # This is useful for maintaining BC if we change a module path that tensor instantiation relies on.
        def find_class(self, mod_name, name):
            mod_name = load_module_mapping.get(mod_name, mod_name)
            return super().find_class(mod_name, name)

    # Load the data (which may in turn use `persistent_load` to load tensors)
    data_file = io.BytesIO(zip_file.get_record(pickle_file))

    unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
    unpickler.persistent_load = persistent_load
    result = unpickler.load()

    torch._utils._validate_loaded_sparse_tensors()

    return result


def _is_torchscript_zip(zip_file):
    return 'constants.pkl' in zip_file.get_all_records()

I'd suggest not sticking to the index, because that approach will not scale over time and leads to duplicate code. My suggestion is to introduce a new field inside each data object. You can set it with a loop on the client, and then collapse/expand based on that field. See the following updated code for app.ts:


//our root app component
import {Component, NgModule, VERSION} from '@angular/core'
import {BrowserModule} from '@angular/platform-browser'

@Component({
  selector: 'my-app',
  template: `
     <div>
        <ol>
            <li *ngFor="let guide of userGuideData; let i=index">
                <a (click)="toggleGuide(guide)">{{guide.title}}</a>
                <div *ngIf="guide.isExpanded">
                        <ol>
                            <li *ngFor="let desc of guide.description">{{desc}}</li>
                        </ol>
                </div>
            </li>
        </ol>
    </div>
  `,
})
export class App {
  name:string;
  public userGuideData: any = [
    {
      title: 'Pre-requisites',
      description: ['Windows 10, i7/i5, 8GB/16GB RAM, HDD 300GB/500GB',
                    'Total size of the deployment: 1.75 GB. The system should have at least 2GB of free space'
    ]
    },
    {
      title: 'Installation Guide',
      description: ['Step 1:  Install Docker. a) For Windows: Follow the instructions from the link https://docs.docker.com/docker-for-windows/install/#download-docker-for-windows and install the latest stable version of docker. b) For Mac: Follow the instructions from the link https://docs.docker.com/docker-for-mac/install/#download-docker-for-mac and install the latest stable version of docker. Alternatively you can also use homebrew to install docker',
                    'Step 2: Once the docker installation is completed. Open the docker settings and go to advanced settings tab. On Windows increase the CPUs to 2 and Memory to 3GB. On Mac you can set it upto 4 CPUs and Memory of 6GB',
                    'Step 3: Unzip the deploymentpackage.zip file. Traverse to this folder on command line and execute the start.sh shell script. During the first time, setup will take time(minimum around 30 mins) to finish as we are downloading images to the local. For the subsequent runs it will take only few minutes to set up, depending on the speed of the system. Once the set up is done the console will display a message given below.',
                    'Step 4: Once the set up is successful. Open any browser and enter http://localhost:8080.  Make sure to free the 8080 port on your local system',
                    'Step 5: Login page appears. Enter the username ‘admin’ and password ‘admin’ and login to the tool.',
                    'Step 6: On launch you will be directed to the HomePage. Upload a zip file with the configurations, using the Upload configurations button. The input file should have a “.zip” extension only. Each device configuration in the zip file should end with a  “.txt” extension only.',
                    'Step 7: Once the upload is successful you will be directed to the Reports page. Initially the tables in the page will be empty. Click the Refresh button recursively to see the current status of the execution.  Once all the devices are processed, press the Download Reports button to download the report in an excel format. Please note that since this is a streaming architecture based application, there will not be any message rendered to signify the completion of the execution. This feature will be provided in the upcoming releases.',
                    'Step 8: For further uploads navigate to the Homepage and follow Steps 6 and 7',
                    'Step 9: For exiting the application, press cross button on the browser.',
                    'Step 10: For exiting the docker, execute the stop.sh shell script given in the deploymentpackage.zip. This will make sure the system is cleaned up (which means that all data will be erased)',
                    'Note: Please download the reports before exiting the docker',

    ]
    },
    {
      title: 'Known issues',
      description: ['Once the docker is stopped, the data will not be persisted. Download the reports and store them for future reference',
                    'The end of report generation will not be notified to the user. The latest status can be known by clicking the refresh button recursively',
                    'Configs with insufficient data are highlighted in the Fallout section of the report. Refer to the “fallout” sheet in the downloaded report for the same',
                    'This tool is not production ready. To make it production ready, tool needs to go through CSDL compliance process'
    ]
    },
    {
      title: 'Performance Statistics under lab conditions',
      description: ['First time application set up time: 27-30 mins',
                    'Total size of the deployment : 1.75 gb',
                    'Tool execution Time 100 configurations of a total 2.5MB takes around 3 mins. 500 configurations of a total 13MB takes around 15.30 mins. 1000 configurations of a total 16MB takes around 27 mins',
                    '7 valid configs shared by the NCEs have been modified to come up to the numbers of 100,500 and 1000 for testing. Note that this is just a ball park data'
    ]
    },
    {
      title: 'CDET details',
      description: ['Project: CSC.cacsp',
                    'Product: mig-sda-dna',
                    'Component: ui/backend/ic',
                    'Version: v0.5'
    ]
    },
    {
      title: 'Team alias',
      description: ['abc@abc.com']
    },
    {
      title: 'FAQ',
      description: ['Whom to contact for issues related to tool? Answer: Send all queries to team alias abc@abc.com',
                    'How to check if the processing of the upload is complete? Answer: Once the upload is successful you will be directed to the Reports page. Initially the tables in the page will be empty. Click the Refresh button recursively to see the current status of the execution. All valid configurations will be displayed in the Summary tab, the invalid configurations will be displayed in the  Fallout tab',
                    'How to do crash recovery ? Answer: Since this is an on prem docker based deliverable, there will be times when the application might crash if the system resources are not sufficient. In case you face such issues we recommend you to restart the application and/or increase the docker resources by following the Step 2 of installation guide section. If the issue still persists report the same',
                    'Can this tool be used for customer deliverables? Answer: A strict “NO”. The tool is not security compliant and should not be used for processing customer data. Future releases will be addressing this issue'
    ]
    }
  ]

  constructor() {
    this.userGuideData.forEach((element) => {
      element.isExpanded = false;
    });
  }

  toggleGuide(guide) {
    guide.isExpanded = !guide.isExpanded;
  }
}

@NgModule({
  imports: [ BrowserModule ],
  declarations: [ App ],
  bootstrap: [ App ]
})
export class AppModule {}

from tornado.escape import _unicode
from tornado import gen, version
from tornado.httpclient import (
    HTTPResponse,
    HTTPError,
    AsyncHTTPClient,
    main,
    _RequestProxy,
    HTTPRequest,
)
from tornado import httputil
from tornado.http1connection import HTTP1Connection, HTTP1ConnectionParameters
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError, IOStream
from tornado.netutil import (
    Resolver,
    OverrideResolver,
    _client_ssl_defaults,
    is_valid_ip,
)
from tornado.log import gen_log
from tornado.tcpclient import TCPClient

import base64
import collections
import copy
import functools
import re
import socket
import ssl
import sys
import time
from io import BytesIO
import urllib.parse

from typing import Dict, Any, Callable, Optional, Type, Union
from types import TracebackType
import typing

if typing.TYPE_CHECKING:
    from typing import Deque, Tuple, List  # noqa: F401


class HTTPTimeoutError(HTTPError):
    """Error raised by SimpleAsyncHTTPClient on timeout.

    For historical reasons, this is a subclass of `.HTTPClientError`
    which simulates a response code of 599.

    .. versionadded:: 5.1
    """

    def __init__(self, message: str) -> None:
        super().__init__(599, message=message)

    def __str__(self) -> str:
        return self.message or "Timeout"


class HTTPStreamClosedError(HTTPError):
    """Error raised by SimpleAsyncHTTPClient when the underlying stream is closed.

    When a more specific exception is available (such as `ConnectionResetError`),
    it may be raised instead of this one.

    For historical reasons, this is a subclass of `.HTTPClientError`
    which simulates a response code of 599.

    .. versionadded:: 5.1
    """

    def __init__(self, message: str) -> None:
        super().__init__(599, message=message)

    def __str__(self) -> str:
        return self.message or "Stream closed"


class SimpleAsyncHTTPClient(AsyncHTTPClient):
    """Non-blocking HTTP client with no external dependencies.

    This class implements an HTTP 1.1 client on top of Tornado's IOStreams.
    Some features found in the curl-based AsyncHTTPClient are not yet
    supported.  In particular, proxies are not supported, connections
    are not reused, and callers cannot select the network interface to be
    used.
    """

    def initialize(  # type: ignore
        self,
        max_clients: int = 10,
        hostname_mapping: Optional[Dict[str, str]] = None,
        max_buffer_size: int = 104857600,
        resolver: Optional[Resolver] = None,
        defaults: Optional[Dict[str, Any]] = None,
        max_header_size: Optional[int] = None,
        max_body_size: Optional[int] = None,
    ) -> None:
        """Creates a AsyncHTTPClient.

        Only a single AsyncHTTPClient instance exists per IOLoop
        in order to provide limitations on the number of pending connections.
        ``force_instance=True`` may be used to suppress this behavior.

        Note that because of this implicit reuse, unless ``force_instance``
        is used, only the first call to the constructor actually uses
        its arguments. It is recommended to use the ``configure`` method
        instead of the constructor to ensure that arguments take effect.

        ``max_clients`` is the number of concurrent requests that can be
        in progress; when this limit is reached additional requests will be
        queued. Note that time spent waiting in this queue still counts
        against the ``request_timeout``.

        ``hostname_mapping`` is a dictionary mapping hostnames to IP addresses.
        It can be used to make local DNS changes when modifying system-wide
        settings like ``/etc/hosts`` is not possible or desirable (e.g. in
        unittests).

        ``max_buffer_size`` (default 100MB) is the number of bytes
        that can be read into memory at once. ``max_body_size``
        (defaults to ``max_buffer_size``) is the largest response body
        that the client will accept.  Without a
        ``streaming_callback``, the smaller of these two limits
        applies; with a ``streaming_callback`` only ``max_body_size``
        does.

        .. versionchanged:: 4.2
           Added the ``max_body_size`` argument.
        """
        super().initialize(defaults=defaults)
        self.max_clients = max_clients
        self.queue = (
            collections.deque()
        )  # type: Deque[Tuple[object, HTTPRequest, Callable[[HTTPResponse], None]]]
        self.active = (
            {}
        )  # type: Dict[object, Tuple[HTTPRequest, Callable[[HTTPResponse], None]]]
        self.waiting = (
            {}
        )  # type: Dict[object, Tuple[HTTPRequest, Callable[[HTTPResponse], None], object]]
        self.max_buffer_size = max_buffer_size
        self.max_header_size = max_header_size
        self.max_body_size = max_body_size
        # TCPClient could create a Resolver for us, but we have to do it
        # ourselves to support hostname_mapping.
        if resolver:
            self.resolver = resolver
            self.own_resolver = False
        else:
            self.resolver = Resolver()
            self.own_resolver = True
        if hostname_mapping is not None:
            self.resolver = OverrideResolver(
                resolver=self.resolver, mapping=hostname_mapping
            )
        self.tcp_client = TCPClient(resolver=self.resolver)

    def close(self) -> None:
        super().close()
        if self.own_resolver:
            self.resolver.close()
        self.tcp_client.close()

    def fetch_impl(
        self, request: HTTPRequest, callback: Callable[[HTTPResponse], None]
    ) -> None:
        key = object()
        self.queue.append((key, request, callback))
        assert request.connect_timeout is not None
        assert request.request_timeout is not None
        timeout_handle = None
        if len(self.active) >= self.max_clients:
            timeout = (
                min(request.connect_timeout, request.request_timeout)
                or request.connect_timeout
                or request.request_timeout
            )  # min but skip zero
            if timeout:
                timeout_handle = self.io_loop.add_timeout(
                    self.io_loop.time() + timeout,
                    functools.partial(self._on_timeout, key, "in request queue"),
                )
        self.waiting[key] = (request, callback, timeout_handle)
        self._process_queue()
        if self.queue:
            gen_log.debug(
                "max_clients limit reached, request queued. "
                "%d active, %d queued requests." % (len(self.active), len(self.queue))
            )

    def _process_queue(self) -> None:
        while self.queue and len(self.active) < self.max_clients:
            key, request, callback = self.queue.popleft()
            if key not in self.waiting:
                continue
            self._remove_timeout(key)
            self.active[key] = (request, callback)
            release_callback = functools.partial(self._release_fetch, key)
            self._handle_request(request, release_callback, callback)

    def _connection_class(self) -> type:
        return _HTTPConnection

    def _handle_request(
        self,
        request: HTTPRequest,
        release_callback: Callable[[], None],
        final_callback: Callable[[HTTPResponse], None],
    ) -> None:
        self._connection_class()(
            self,
            request,
            release_callback,
            final_callback,
            self.max_buffer_size,
            self.tcp_client,
            self.max_header_size,
            self.max_body_size,
        )

    def _release_fetch(self, key: object) -> None:
        del self.active[key]
        self._process_queue()

    def _remove_timeout(self, key: object) -> None:
        if key in self.waiting:
            request, callback, timeout_handle = self.waiting[key]
            if timeout_handle is not None:
                self.io_loop.remove_timeout(timeout_handle)
            del self.waiting[key]

    def _on_timeout(self, key: object, info: Optional[str] = None) -> None:
        """Timeout callback of request.

        Construct a timeout HTTPResponse when a timeout occurs.

        :arg object key: A simple object to mark the request.
        :arg string info: More detailed timeout information.
        """
        request, callback, timeout_handle = self.waiting[key]
        self.queue.remove((key, request, callback))

        error_message = "Timeout {0}".format(info) if info else "Timeout"
        timeout_response = HTTPResponse(
            request,
            599,
            error=HTTPTimeoutError(error_message),
            request_time=self.io_loop.time() - request.start_time,
        )
        self.io_loop.add_callback(callback, timeout_response)
        del self.waiting[key]


class _HTTPConnection(httputil.HTTPMessageDelegate):
    _SUPPORTED_METHODS = set(
        ["GET", "HEAD", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]
    )

    def __init__(
        self,
        client: Optional[SimpleAsyncHTTPClient],
        request: HTTPRequest,
        release_callback: Callable[[], None],
        final_callback: Callable[[HTTPResponse], None],
        max_buffer_size: int,
        tcp_client: TCPClient,
        max_header_size: int,
        max_body_size: int,
    ) -> None:
        self.io_loop = IOLoop.current()
        self.start_time = self.io_loop.time()
        self.start_wall_time = time.time()
        self.client = client
        self.request = request
        self.release_callback = release_callback
        self.final_callback = final_callback
        self.max_buffer_size = max_buffer_size
        self.tcp_client = tcp_client
        self.max_header_size = max_header_size
        self.max_body_size = max_body_size
        self.code = None  # type: Optional[int]
        self.headers = None  # type: Optional[httputil.HTTPHeaders]
        self.chunks = []  # type: List[bytes]
        self._decompressor = None
        # Timeout handle returned by IOLoop.add_timeout
        self._timeout = None  # type: object
        self._sockaddr = None
        IOLoop.current().add_future(
            gen.convert_yielded(self.run()), lambda f: f.result()
        )

    async def run(self) -> None:
        try:
            self.parsed = urllib.parse.urlsplit(_unicode(self.request.url))
            if self.parsed.scheme not in ("http", "https"):
                raise ValueError("Unsupported url scheme: %s" % self.request.url)
            # urlsplit results have hostname and port attributes, but they
            # didn't support ipv6 literals until python 2.7.
            netloc = self.parsed.netloc
            if "@" in netloc:
                userpass, _, netloc = netloc.rpartition("@")
            host, port = httputil.split_host_and_port(netloc)
            if port is None:
                port = 443 if self.parsed.scheme == "https" else 80
            if re.match(r"^\[.*\]$", host):
                # raw ipv6 addresses in urls are enclosed in brackets
                host = host[1:-1]
            self.parsed_hostname = host  # save final host for _on_connect

            if self.request.allow_ipv6 is False:
                af = socket.AF_INET
            else:
                af = socket.AF_UNSPEC

            ssl_options = self._get_ssl_options(self.parsed.scheme)

            source_ip = None
            if self.request.network_interface:
                if is_valid_ip(self.request.network_interface):
                    source_ip = self.request.network_interface
                else:
                    raise ValueError(
                        "Unrecognized IPv4 or IPv6 address for network_interface, got %r"
                        % (self.request.network_interface,)
                    )

            timeout = (
                min(self.request.connect_timeout, self.request.request_timeout)
                or self.request.connect_timeout
                or self.request.request_timeout
            )  # min but skip zero
            if timeout:
                self._timeout = self.io_loop.add_timeout(
                    self.start_time + timeout,
                    functools.partial(self._on_timeout, "while connecting"),
                )
            stream = await self.tcp_client.connect(
                host,
                port,
                af=af,
                ssl_options=ssl_options,
                max_buffer_size=self.max_buffer_size,
                source_ip=source_ip,
            )

            if self.final_callback is None:
                # final_callback is cleared if we've hit our timeout.
                stream.close()
                return
            self.stream = stream
            self.stream.set_close_callback(self.on_connection_close)
            self._remove_timeout()
            if self.final_callback is None:
                return
            if self.request.request_timeout:
                self._timeout = self.io_loop.add_timeout(
                    self.start_time + self.request.request_timeout,
                    functools.partial(self._on_timeout, "during request"),
                )
            if (
                self.request.method not in self._SUPPORTED_METHODS
                and not self.request.allow_nonstandard_methods
            ):
                raise KeyError("unknown method %s" % self.request.method)
            for key in (
                "proxy_host",
                "proxy_port",
                "proxy_username",
                "proxy_password",
                "proxy_auth_mode",
            ):
                if getattr(self.request, key, None):
                    raise NotImplementedError("%s not supported" % key)
            if "Connection" not in self.request.headers:
                self.request.headers["Connection"] = "close"
            if "Host" not in self.request.headers:
                if "@" in self.parsed.netloc:
                    self.request.headers["Host"] = self.parsed.netloc.rpartition("@")[
                        -1
                    ]
                else:
                    self.request.headers["Host"] = self.parsed.netloc
            username, password = None, None
            if self.parsed.username is not None:
                username, password = self.parsed.username, self.parsed.password
            elif self.request.auth_username is not None:
                username = self.request.auth_username
                password = self.request.auth_password or ""
            if username is not None:
                assert password is not None
                if self.request.auth_mode not in (None, "basic"):
                    raise ValueError("unsupported auth_mode %s", self.request.auth_mode)
                self.request.headers["Authorization"] = "Basic " + _unicode(
                    base64.b64encode(
                        httputil.encode_username_password(username, password)
                    )
                )
            if self.request.user_agent:
                self.request.headers["User-Agent"] = self.request.user_agent
            elif self.request.headers.get("User-Agent") is None:
                self.request.headers["User-Agent"] = "Tornado/{}".format(version)
            if not self.request.allow_nonstandard_methods:
                # Some HTTP methods nearly always have bodies while others
                # almost never do. Fail in this case unless the user has
                # opted out of sanity checks with allow_nonstandard_methods.
                body_expected = self.request.method in ("POST", "PATCH", "PUT")
                body_present = (
                    self.request.body is not None
                    or self.request.body_producer is not None
                )
                if (body_expected and not body_present) or (
                    body_present and not body_expected
                ):
                    raise ValueError(
                        "Body must %sbe None for method %s (unless "
                        "allow_nonstandard_methods is true)"
                        % ("not " if body_expected else "", self.request.method)
                    )
            if self.request.expect_100_continue:
                self.request.headers["Expect"] = "100-continue"
            if self.request.body is not None:
                # When body_producer is used the caller is responsible for
                # setting Content-Length (or else chunked encoding will be used).
                self.request.headers["Content-Length"] = str(len(self.request.body))
            if (
                self.request.method == "POST"
                and "Content-Type" not in self.request.headers
            ):
                self.request.headers[
                    "Content-Type"
                ] = "application/x-www-form-urlencoded"
            if self.request.decompress_response:
                self.request.headers["Accept-Encoding"] = "gzip"
            req_path = (self.parsed.path or "/") + (
                ("?" + self.parsed.query) if self.parsed.query else ""
            )
            self.connection = self._create_connection(stream)
            start_line = httputil.RequestStartLine(self.request.method, req_path, "")
            self.connection.write_headers(start_line, self.request.headers)
            if self.request.expect_100_continue:
                await self.connection.read_response(self)
            else:
                await self._write_body(True)
        except Exception:
            if not self._handle_exception(*sys.exc_info()):
                raise

    def _get_ssl_options(
        self, scheme: str
    ) -> Union[None, Dict[str, Any], ssl.SSLContext]:
        if scheme == "https":
            if self.request.ssl_options is not None:
                return self.request.ssl_options
            # If we are using the defaults, don't construct a
            # new SSLContext.
            if (
                self.request.validate_cert
                and self.request.ca_certs is None
                and self.request.client_cert is None
                and self.request.client_key is None
            ):
                return _client_ssl_defaults
            ssl_ctx = ssl.create_default_context(
                ssl.Purpose.SERVER_AUTH, cafile=self.request.ca_certs
            )
            if not self.request.validate_cert:
                ssl_ctx.check_hostname = False
                ssl_ctx.verify_mode = ssl.CERT_NONE
            if self.request.client_cert is not None:
                ssl_ctx.load_cert_chain(
                    self.request.client_cert, self.request.client_key
                )
            if hasattr(ssl, "OP_NO_COMPRESSION"):
                # See netutil.ssl_options_to_context
                ssl_ctx.options |= ssl.OP_NO_COMPRESSION
            return ssl_ctx
        return None

    def _on_timeout(self, info: Optional[str] = None) -> None:
        """Timeout callback of _HTTPConnection instance.

        Raise a `HTTPTimeoutError` when a timeout occurs.

        :arg string info: More detailed timeout information.
        """
        self._timeout = None
        error_message = "Timeout {0}".format(info) if info else "Timeout"
        if self.final_callback is not None:
            self._handle_exception(
                HTTPTimeoutError, HTTPTimeoutError(error_message), None
            )

    def _remove_timeout(self) -> None:
        if self._timeout is not None:
            self.io_loop.remove_timeout(self._timeout)
            self._timeout = None

    def _create_connection(self, stream: IOStream) -> HTTP1Connection:
        stream.set_nodelay(True)
        connection = HTTP1Connection(
            stream,
            True,
            HTTP1ConnectionParameters(
                no_keep_alive=True,
                max_header_size=self.max_header_size,
                max_body_size=self.max_body_size,
                decompress=bool(self.request.decompress_response),
            ),
            self._sockaddr,
        )
        return connection

    async def _write_body(self, start_read: bool) -> None:
        if self.request.body is not None:
            self.connection.write(self.request.body)
        elif self.request.body_producer is not None:
            fut = self.request.body_producer(self.connection.write)
            if fut is not None:
                await fut
        self.connection.finish()
        if start_read:
            try:
                await self.connection.read_response(self)
            except StreamClosedError:
                if not self._handle_exception(*sys.exc_info()):
                    raise

    def _release(self) -> None:
        if self.release_callback is not None:
            release_callback = self.release_callback
            self.release_callback = None  # type: ignore
            release_callback()

    def _run_callback(self, response: HTTPResponse) -> None:
        self._release()
        if self.final_callback is not None:
            final_callback = self.final_callback
            self.final_callback = None  # type: ignore
            self.io_loop.add_callback(final_callback, response)

    def _handle_exception(
        self,
        typ: "Optional[Type[BaseException]]",
        value: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> bool:
        if self.final_callback:
            self._remove_timeout()
            if isinstance(value, StreamClosedError):
                if value.real_error is None:
                    value = HTTPStreamClosedError("Stream closed")
                else:
                    value = value.real_error
            self._run_callback(
                HTTPResponse(
                    self.request,
                    599,
                    error=value,
                    request_time=self.io_loop.time() - self.start_time,
                    start_time=self.start_wall_time,
                )
            )

            if hasattr(self, "stream"):
                # TODO: this may cause a StreamClosedError to be raised
                # by the connection's Future.  Should we cancel the
                # connection more gracefully?
                self.stream.close()
            return True
        else:
            # If our callback has already been called, we are probably
            # catching an exception that is not caused by us but rather
            # some child of our callback. Rather than drop it on the floor,
            # pass it along, unless it's just the stream being closed.
            return isinstance(value, StreamClosedError)

    def on_connection_close(self) -> None:
        if self.final_callback is not None:
            message = "Connection closed"
            if self.stream.error:
                raise self.stream.error
            try:
                raise HTTPStreamClosedError(message)
            except HTTPStreamClosedError:
                self._handle_exception(*sys.exc_info())

    async def headers_received(
        self,
        first_line: Union[httputil.ResponseStartLine, httputil.RequestStartLine],
        headers: httputil.HTTPHeaders,
    ) -> None:
        assert isinstance(first_line, httputil.ResponseStartLine)
        if self.request.expect_100_continue and first_line.code == 100:
            await self._write_body(False)
            return
        self.code = first_line.code
        self.reason = first_line.reason
        self.headers = headers

        if self._should_follow_redirect():
            return

        if self.request.header_callback is not None:
            # Reassemble the start line.
            self.request.header_callback("%s %s %s\r\n" % first_line)
            for k, v in self.headers.get_all():
                self.request.header_callback("%s: %s\r\n" % (k, v))
            self.request.header_callback("\r\n")

    def _should_follow_redirect(self) -> bool:
        if self.request.follow_redirects:
            assert self.request.max_redirects is not None
            return (
                self.code in (301, 302, 303, 307, 308)
                and self.request.max_redirects > 0
                and self.headers is not None
                and self.headers.get("Location") is not None
            )
        return False

    def finish(self) -> None:
        assert self.code is not None
        data = b"".join(self.chunks)
        self._remove_timeout()
        original_request = getattr(self.request, "original_request", self.request)
        if self._should_follow_redirect():
            assert isinstance(self.request, _RequestProxy)
            new_request = copy.copy(self.request.request)
            new_request.url = urllib.parse.urljoin(
                self.request.url, self.headers["Location"]
            )
            new_request.max_redirects = self.request.max_redirects - 1
            del new_request.headers["Host"]
            # https://tools.ietf.org/html/rfc7231#section-6.4
            #
            # The original HTTP spec said that after a 301 or 302
            # redirect, the request method should be preserved.
            # However, browsers implemented this by changing the
            # method to GET, and the behavior stuck. 303 redirects
            # always specified this POST-to-GET behavior, arguably
            # for *all* methods, but libcurl < 7.70 only does this
            # for POST, while libcurl >= 7.70 does it for other methods.
            if (self.code == 303 and self.request.method != "HEAD") or (
                self.code in (301, 302) and self.request.method == "POST"
            ):
                new_request.method = "GET"
                new_request.body = None
                for h in [
                    "Content-Length",
                    "Content-Type",
                    "Content-Encoding",
                    "Transfer-Encoding",
                ]:
                    try:
                        del self.request.headers[h]
                    except KeyError:
                        pass
            new_request.original_request = original_request
            final_callback = self.final_callback
            self.final_callback = None
            self._release()
            fut = self.client.fetch(new_request, raise_error=False)
            fut.add_done_callback(lambda f: final_callback(f.result()))
            self._on_end_request()
            return
        if self.request.streaming_callback:
            buffer = BytesIO()
        else:
            buffer = BytesIO(data)  # TODO: don't require one big string?
        response = HTTPResponse(
            original_request,
            self.code,
            reason=getattr(self, "reason", None),
            headers=self.headers,
            request_time=self.io_loop.time() - self.start_time,
            start_time=self.start_wall_time,
            buffer=buffer,
            effective_url=self.request.url,
        )
        self._run_callback(response)
        self._on_end_request()

    def _on_end_request(self) -> None:
        self.stream.close()

    def data_received(self, chunk: bytes) -> None:
        if self._should_follow_redirect():
            # We're going to follow a redirect so just discard the body.
            return
        if self.request.streaming_callback is not None:
            self.request.streaming_callback(chunk)
        else:
            self.chunks.append(chunk)


if __name__ == "__main__":
    AsyncHTTPClient.configure(SimpleAsyncHTTPClient)
    main()
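
As a complement to the listing above, here is a minimal usage sketch (not part of simple_httpclient.py): it assumes Tornado 5.1 or later (for HTTPTimeoutError), and the host name "example.invalid", the mapped address, the port, and the max_clients value are placeholders chosen purely for illustration. It configures SimpleAsyncHTTPClient through AsyncHTTPClient.configure, as the initialize docstring recommends, passes max_clients and hostname_mapping there, issues a single fetch with a request_timeout, and separates timeouts from other client errors.

from tornado.httpclient import AsyncHTTPClient, HTTPClientError
from tornado.ioloop import IOLoop
from tornado.simple_httpclient import HTTPTimeoutError, SimpleAsyncHTTPClient


async def fetch_example() -> None:
    # Only the first AsyncHTTPClient constructed per IOLoop uses its
    # constructor arguments, so configure() is the reliable way to set
    # these options (see the initialize() docstring above).
    AsyncHTTPClient.configure(
        SimpleAsyncHTTPClient,
        max_clients=20,  # placeholder concurrency limit
        hostname_mapping={"example.invalid": "127.0.0.1"},  # placeholder DNS override
    )
    client = AsyncHTTPClient()
    try:
        response = await client.fetch(
            "http://example.invalid:8888/",  # placeholder URL
            request_timeout=5.0,
        )
        print(response.code, len(response.body))
    except HTTPTimeoutError:
        # Simulated 599: connect_timeout or request_timeout expired,
        # including time spent queued behind max_clients.
        print("request timed out")
    except HTTPClientError as e:
        print("request failed:", e)


if __name__ == "__main__":
    IOLoop.current().run_sync(fetch_example)

Because HTTPTimeoutError is a subclass of HTTPClientError, the more specific except clause has to come before the general one.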
