# Activity · explosion/curated-transformers

Public repository owned by explosion; default branch `main`; created 2022-09-14.

- 2024-04-17 · `v1.3.x` · danieldk merged "Backport: Fix activation lookup with Python 3.12.3 (#375) (#377)". The metaclass `EnumMeta`/`EnumType` was used to override the reporting of missing enum values (to list the full set of supported activations). However, Python 3.12.3 changed the default value of the `name` parameter of `EnumType.__call__` from `None` to `_not_given` (https://github.com/python/cpython/commit/d771729679d39904768f60b3352e02f5f491966c). Even though this is a public API (which now uses a private default value), it seemed too risky to keep relying on it, so the improved error reporting is now implemented via `Enum._missing_` instead. The backport also sets the version to 1.3.2 and adjusts two cross-tests for changes in HF transformers (#367): `test_rotary_embeddings_against_hf` is fixed for the latest transformers, and one test is marked xfail because HfFileSystem is currently broken.
- 2024-04-17 · `v2.0.x` · danieldk merged "Set version to 2.0.1 (#376)".
- 2024-04-17 · `v2.0.x` · danieldk pushed "Fix activation lookup with Python 3.12.3 (#375)", the same `Enum._missing_` fix described above.
- 2024-04-17 · `main` · danieldk merged "Fix activation lookup with Python 3.12.3 (#375)".
- 2024-04-16 · `v2.0.x` · branch created by danieldk at "Finalize the API changes for 2.0 (#374)".
- 2024-04-16 · `main` · danieldk merged "Finalize the API changes for 2.0 (#374)": the `qkv_split` arguments for attention heads are now mandatory, the `FromHFHub` mixins are renamed to `FromHF`, and `FromHF.convert_hf_state_dict` is removed.
- 2024-04-15 · `main` · danieldk merged "Set version to 2.0.0 (#373)", also updating the `curated-tokenizers` dependency to 2.0.0.
- 2024-04-12 · `main` · danieldk merged "Set version to 2.0.0.dev3, update curated-tokenizers dep to 2.0.0.dev0 (#372)"; the minimum supported Python version is now 3.9.
- 2024-04-10 · `main` · danieldk merged "Set version to 2.0.0.dev2 (#371)".
- 2024-04-09 · `main` · danieldk merged "Add support for loading parameters in-place (#370)". In some applications (e.g. spaCy Curated Transformers) the model is already constructed and parameters need to be loaded in-place; this change adds in-place versions of the `from_*` class methods. To support this properly, the in-place loaders no longer need access to the configuration: the Torch dtype deserialization has moved from the `from_repo` method to the configuration deserialization, and all model configurations now take a `dtype` parameter.
- 2024-04-08 · `main` · danieldk merged "Use small ELECTRA model for testing (#369)".
- 2024-04-02 · `main` · danieldk merged "Added ELECTRA as a thin wrapper around BERT (#358)", including an ELECTRA encoder and a test for the ELECTRA tokenizers.
- 2024-04-02 · `main` · danieldk merged "Adjust two cross-tests for changes in HF transformers (#367)": `test_rotary_embeddings_against_hf` is fixed for the latest transformers, and one test is marked xfail because HfFileSystem is currently broken.
- 2024-02-12 · `main` · danieldk merged "Set version to 2.0.0.dev1 (#366)".
- 2024-02-12 · `main` · danieldk merged "Remove support for TorchScript tracing (#361)". TorchScript tracing support was added a while back so that models could be exported to ONNX, but it relies on metaclasses, which break with `torch.compile` in the latest PyTorch versions. PyTorch now provides a TorchDynamo-based ONNX exporter (https://pytorch.org/docs/stable/onnx_dynamo.html), so TorchScript tracing support is removed along with the fragile dataclass/tuple/dict polymorphism it required.
- 2024-02-11 · `v1.3.x` · danieldk merged "Set version to 1.3.1 (#365)".
- 2024-02-11 · `v1.3.x` · danieldk merged "Ensure that parameters are leaf nodes when loading a model (#364)". A subtle bug populated models with parameters that were not leaf nodes, because `to` was called on them for device placement. This change fixes the issue and validates in the model tests that all model parameters are leaf nodes.
- 2024-02-11 · `v1.3.x` · danieldk merged "Set torch upper bound to <2.1.0 (#363)". Some changes in PyTorch 2.1.0 and later are incompatible with Curated Transformers 1.x, and fixing them would require API changes, so an upper bound is set on the supported PyTorch versions. Curated Transformers 2.0.0, which is compatible with the latest PyTorch versions, will be released soon.
- 2024-02-08 · `main` · danieldk merged "Ensure that parameters are leaf nodes when loading a model (#362)", the same leaf-node fix as on `v1.3.x`.
- 2024-02-08 · `main` · danieldk merged "Clear output of Torch SDPA for masked pieces (#360)". Since Torch 2.1, the memory-efficient SDPA GPU kernel returns NaN for pieces that are completely masked out. This leads to NaN propagation in the next attention layer: masked pieces get an attention weight of zero, but zero times NaN is still NaN. The fix sets masked tokens to zero to clear out any NaNs. The implementation currently relies on the query dimension of the mask being singular; in the future the `AttentionMask` class should probably be redesigned to account for the differences between attention masks and causal masks.
- 2023-12-01 · `dash-test` · branch deleted by shadeMe.
- 2023-12-01 · `main` · shadeMe merged "Add curated_transformers.__version__ and use it for doc generation (#357)".
- 2023-12-01 · `dash-test` · danieldk force-pushed "Add curated_transformers.__version__ and use it for doc generation", building on "Use versioneer to generate versions from metadata + git", which also uses the version number in the Sphinx configuration to complete the metadata.
- 2023-12-01 · `dash-test` · branch created by danieldk at "Check if version is the missing metadata".
- 2023-11-08 · `main` · svlandeg merged "Add support for file write/upload operations with `HfHubRepository` (#354)" (as transactions or otherwise). Temp files are removed with `try..finally`, `FsspecTransactionContext` carries a warning that it is currently a no-op, an `upload-tests` marker is added for repository tests that require upload permissions, and `HFTokenizer` is loaded from the local cache without extra overhead.
- 2023-11-07 · `main` · shadeMe merged "Add support for converting Curated Transformer configs to Hugging Face compatible configs (#333)". The `to` method is removed from `FromHFHub` because it breaks the use of `Generic` as a superclass; the change also includes a documented workaround for Python 3.8 and fixes for Falcon.
- 2023-10-13 · `sinusoidal` · branch created by danieldk at "Add basic support for BERT/RoBERTa with sinusoidal embeddings".
- 2023-10-05 · `main` · danieldk merged "AutoModel: let models check if the configuration is supported (#352)". This allows more complex models such as Falcon to be split up into multiple classes and corresponding entry points.
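The `Enum._missing_` approach referenced in #375 can be sketched as follows. This is a minimal illustration, not curated-transformers' actual activation registry: the member names here are hypothetical stand-ins.

```python
from enum import Enum


class Activation(Enum):
    """Illustrative activation registry (hypothetical members)."""

    GELU = "gelu"
    RELU = "relu"
    SILU = "silu"

    @classmethod
    def _missing_(cls, value):
        # Called by the Enum machinery when a value lookup fails.
        # Report the full set of supported values instead of the
        # terse default error.
        supported = ", ".join(sorted(member.value for member in cls))
        raise ValueError(
            f"Invalid activation: {value!r}. Supported activations: {supported}"
        )
```

Unlike overriding `EnumType.__call__` via a metaclass, `_missing_` is a documented per-class hook, so it is insulated from signature changes like the `_not_given` default introduced in Python 3.12.3.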
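The NaN-propagation issue behind #360 comes down to IEEE-754 arithmetic: a zero attention weight does not neutralize a NaN value, so masked positions must be overwritten explicitly. A stdlib-only sketch of the idea (the list-based `clear_masked` helper is hypothetical, standing in for the actual tensor operation):

```python
import math

nan = float("nan")

# A masked piece gets attention weight 0.0, but if the SDPA kernel
# produced NaN for that piece, the weighted sum is still NaN:
weighted = 0.0 * nan
assert math.isnan(weighted)


def clear_masked(values, mask):
    """Zero out entries where mask is False (True = keep).

    Stand-in for overwriting masked positions of the attention
    output before they feed into the next layer.
    """
    return [v if keep else 0.0 for v, keep in zip(values, mask)]


cleared = clear_masked([1.5, nan, 2.0], [True, False, True])
assert cleared == [1.5, 0.0, 2.0]
```

Once the NaN is replaced by an explicit 0.0, the zero attention weight in the next layer behaves as intended.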
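The AutoModel change in #352 ("let models check if the configuration is supported") suggests a dispatch pattern along these lines. The class names, `is_supported` hook, and registration decorator are assumptions for illustration, not curated-transformers' actual API:

```python
class AutoModel:
    """Sketch of config-driven dispatch: each registered model class
    decides for itself whether it supports a given configuration."""

    _model_types = []

    @classmethod
    def register(cls, model_cls):
        cls._model_types.append(model_cls)
        return model_cls

    @classmethod
    def from_config(cls, config):
        for model_cls in cls._model_types:
            if model_cls.is_supported(config):
                return model_cls(config)
        raise ValueError(
            f"Unsupported configuration: {config.get('model_type')!r}"
        )


@AutoModel.register
class FalconModel:
    def __init__(self, config):
        self.config = config

    @staticmethod
    def is_supported(config):
        # Several Falcon variants could each register a separate class
        # with a narrower check, enabling the split described in #352.
        return config.get("model_type") == "falcon"
```

Because the check lives on the model class rather than in a central table, a complex architecture can be served by several classes, each with its own entry point.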